00:00:00.000 Started by upstream project "autotest-nightly" build number 4239 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3602 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.182 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.183 The recommended git tool is: git 00:00:00.183 using credential 00000000-0000-0000-0000-000000000002 00:00:00.185 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.229 Fetching changes from the remote Git repository 00:00:00.231 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.265 Using shallow fetch with depth 1 00:00:00.265 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.265 > git --version # timeout=10 00:00:00.290 > git --version # 'git version 2.39.2' 00:00:00.290 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.322 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.335 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.347 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:08.348 > git config core.sparsecheckout # timeout=10 00:00:08.359 > git read-tree -mu HEAD # timeout=10 00:00:08.377 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:08.398 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:08.398 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:08.489 [Pipeline] Start of Pipeline 00:00:08.501 [Pipeline] library 00:00:08.503 Loading library shm_lib@master 00:00:08.503 Library shm_lib@master is cached. Copying from home. 00:00:08.519 [Pipeline] node 00:00:08.530 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.532 [Pipeline] { 00:00:08.542 [Pipeline] catchError 00:00:08.543 [Pipeline] { 00:00:08.553 [Pipeline] wrap 00:00:08.562 [Pipeline] { 00:00:08.573 [Pipeline] stage 00:00:08.575 [Pipeline] { (Prologue) 00:00:08.595 [Pipeline] echo 00:00:08.597 Node: VM-host-SM38 00:00:08.605 [Pipeline] cleanWs 00:00:08.617 [WS-CLEANUP] Deleting project workspace... 00:00:08.617 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.624 [WS-CLEANUP] done 00:00:08.846 [Pipeline] setCustomBuildProperty 00:00:08.939 [Pipeline] httpRequest 00:00:09.665 [Pipeline] echo 00:00:09.667 Sorcerer 10.211.164.101 is alive 00:00:09.678 [Pipeline] retry 00:00:09.680 [Pipeline] { 00:00:09.692 [Pipeline] httpRequest 00:00:09.696 HttpMethod: GET 00:00:09.697 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.698 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.719 Response Code: HTTP/1.1 200 OK 00:00:09.719 Success: Status code 200 is in the accepted range: 200,404 00:00:09.720 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:34.636 [Pipeline] } 00:00:34.653 [Pipeline] // retry 00:00:34.661 [Pipeline] sh 00:00:34.951 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:34.968 [Pipeline] httpRequest 00:00:35.575 [Pipeline] echo 00:00:35.577 Sorcerer 10.211.164.101 is alive 00:00:35.587 [Pipeline] retry 00:00:35.588 [Pipeline] { 00:00:35.603 [Pipeline] httpRequest 00:00:35.608 HttpMethod: GET 00:00:35.609 URL: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:35.609 Sending request to url: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:35.625 Response Code: HTTP/1.1 200 OK 00:00:35.626 Success: Status code 200 is in the accepted range: 200,404 00:00:35.626 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:54.036 [Pipeline] } 00:00:54.054 [Pipeline] // retry 00:00:54.061 [Pipeline] sh 00:00:54.350 + tar --no-same-owner -xf spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:57.671 [Pipeline] sh 00:00:57.957 + git -C spdk log --oneline -n5 00:00:57.957 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:00:57.957 12fc2abf1 test: Remove autopackage.sh 00:00:57.957 83ba90867 fio/bdev: fix typo in README 00:00:57.957 45379ed84 module/compress: Cleanup vol data, when claim fails 00:00:57.957 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:00:57.986 [Pipeline] writeFile 00:00:58.022 [Pipeline] sh 00:00:58.310 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:58.323 [Pipeline] sh 00:00:58.608 + cat autorun-spdk.conf 00:00:58.608 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.608 SPDK_TEST_NVME=1 00:00:58.608 SPDK_TEST_FTL=1 00:00:58.608 SPDK_TEST_ISAL=1 00:00:58.608 SPDK_RUN_ASAN=1 00:00:58.608 SPDK_RUN_UBSAN=1 00:00:58.608 SPDK_TEST_XNVME=1 00:00:58.608 SPDK_TEST_NVME_FDP=1 00:00:58.608 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.617 RUN_NIGHTLY=1 00:00:58.619 [Pipeline] } 00:00:58.633 [Pipeline] // stage 00:00:58.647 [Pipeline] stage 00:00:58.649 [Pipeline] { (Run VM) 00:00:58.662 [Pipeline] sh 00:00:58.946 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.946 + echo 'Start stage prepare_nvme.sh' 00:00:58.946 Start stage prepare_nvme.sh 00:00:58.946 + [[ -n 8 ]] 00:00:58.946 + disk_prefix=ex8 00:00:58.946 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:58.946 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:58.946 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:58.946 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.946 ++ SPDK_TEST_NVME=1 00:00:58.946 ++ SPDK_TEST_FTL=1 00:00:58.946 ++ SPDK_TEST_ISAL=1 00:00:58.946 ++ SPDK_RUN_ASAN=1 00:00:58.946 ++ SPDK_RUN_UBSAN=1 00:00:58.946 
++ SPDK_TEST_XNVME=1 00:00:58.946 ++ SPDK_TEST_NVME_FDP=1 00:00:58.946 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.946 ++ RUN_NIGHTLY=1 00:00:58.946 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:58.946 + nvme_files=() 00:00:58.946 + declare -A nvme_files 00:00:58.946 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.946 + nvme_files['nvme.img']=5G 00:00:58.946 + nvme_files['nvme-cmb.img']=5G 00:00:58.946 + nvme_files['nvme-multi0.img']=4G 00:00:58.946 + nvme_files['nvme-multi1.img']=4G 00:00:58.946 + nvme_files['nvme-multi2.img']=4G 00:00:58.946 + nvme_files['nvme-openstack.img']=8G 00:00:58.946 + nvme_files['nvme-zns.img']=5G 00:00:58.946 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.946 + (( SPDK_TEST_FTL == 1 )) 00:00:58.946 + nvme_files["nvme-ftl.img"]=6G 00:00:58.946 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.946 + nvme_files["nvme-fdp.img"]=1G 00:00:58.946 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.946 + for nvme in "${!nvme_files[@]}" 00:00:58.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:00:58.946 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.946 + for nvme in "${!nvme_files[@]}" 00:00:58.946 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:00:59.893 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:59.893 + for nvme in "${!nvme_files[@]}" 00:00:59.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:00:59.893 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.893 + for nvme in "${!nvme_files[@]}" 00:00:59.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:00:59.893 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:59.893 + for nvme in "${!nvme_files[@]}" 00:00:59.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:00.466 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.466 + for nvme in "${!nvme_files[@]}" 00:01:00.466 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:00.466 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.466 + for nvme in "${!nvme_files[@]}" 00:01:00.466 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:00.466 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.466 + for nvme in "${!nvme_files[@]}" 00:01:00.466 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:00.728 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:00.728 + for nvme in "${!nvme_files[@]}" 00:01:00.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:01.303 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw 
size=5368709120 preallocation=falloc 00:01:01.303 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:01.303 + echo 'End stage prepare_nvme.sh' 00:01:01.303 End stage prepare_nvme.sh 00:01:01.317 [Pipeline] sh 00:01:01.605 + DISTRO=fedora39 00:01:01.605 + CPUS=10 00:01:01.605 + RAM=12288 00:01:01.605 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:01.605 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:01.605 00:01:01.605 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:01.605 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:01.605 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:01.605 HELP=0 00:01:01.605 DRY_RUN=0 00:01:01.605 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:01.605 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:01.605 NVME_AUTO_CREATE=0 00:01:01.605 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:01.605 NVME_CMB=,,,, 00:01:01.605 NVME_PMR=,,,, 00:01:01.605 NVME_ZNS=,,,, 00:01:01.605 NVME_MS=true,,,, 00:01:01.605 NVME_FDP=,,,on, 00:01:01.605 SPDK_VAGRANT_DISTRO=fedora39 00:01:01.605 SPDK_VAGRANT_VMCPU=10 00:01:01.605 SPDK_VAGRANT_VMRAM=12288 00:01:01.605 SPDK_VAGRANT_PROVIDER=libvirt 00:01:01.605 SPDK_VAGRANT_HTTP_PROXY= 00:01:01.605 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:01.605 SPDK_OPENSTACK_NETWORK=0 00:01:01.605 VAGRANT_PACKAGE_BOX=0 00:01:01.605 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:01.605 FORCE_DISTRO=true 00:01:01.605 VAGRANT_BOX_VERSION= 00:01:01.605 EXTRA_VAGRANTFILES= 00:01:01.605 NIC_MODEL=e1000 00:01:01.605 00:01:01.605 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:01.605 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:04.157 Bringing machine 'default' up with 'libvirt' provider... 00:01:04.420 ==> default: Creating image (snapshot of base box volume). 00:01:04.682 ==> default: Creating domain with the following settings... 
00:01:04.682 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730607687_4dc3172a9860ffafbe3f 00:01:04.682 ==> default: -- Domain type: kvm 00:01:04.682 ==> default: -- Cpus: 10 00:01:04.682 ==> default: -- Feature: acpi 00:01:04.682 ==> default: -- Feature: apic 00:01:04.682 ==> default: -- Feature: pae 00:01:04.682 ==> default: -- Memory: 12288M 00:01:04.682 ==> default: -- Memory Backing: hugepages: 00:01:04.682 ==> default: -- Management MAC: 00:01:04.682 ==> default: -- Loader: 00:01:04.682 ==> default: -- Nvram: 00:01:04.682 ==> default: -- Base box: spdk/fedora39 00:01:04.682 ==> default: -- Storage pool: default 00:01:04.682 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730607687_4dc3172a9860ffafbe3f.img (20G) 00:01:04.682 ==> default: -- Volume Cache: default 00:01:04.682 ==> default: -- Kernel: 00:01:04.682 ==> default: -- Initrd: 00:01:04.682 ==> default: -- Graphics Type: vnc 00:01:04.682 ==> default: -- Graphics Port: -1 00:01:04.682 ==> default: -- Graphics IP: 127.0.0.1 00:01:04.682 ==> default: -- Graphics Password: Not defined 00:01:04.682 ==> default: -- Video Type: cirrus 00:01:04.682 ==> default: -- Video VRAM: 9216 00:01:04.682 ==> default: -- Sound Type: 00:01:04.682 ==> default: -- Keymap: en-us 00:01:04.682 ==> default: -- TPM Path: 00:01:04.682 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:04.682 ==> default: -- Command line args: 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:04.682 ==> default: -> value=-drive, 00:01:04.682 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:04.682 ==> default: -> value=-device, 00:01:04.682 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.682 ==> default: Creating shared folders metadata... 00:01:04.944 ==> default: Starting domain. 00:01:07.499 ==> default: Waiting for domain to get an IP address... 00:01:25.635 ==> default: Waiting for SSH to become available... 00:01:25.635 ==> default: Configuring and enabling network interfaces... 00:01:28.972 default: SSH address: 192.168.121.63:22 00:01:28.972 default: SSH username: vagrant 00:01:28.972 default: SSH auth method: private key 00:01:30.888 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:39.043 ==> default: Mounting SSHFS shared folder... 00:01:40.954 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:40.954 ==> default: Checking Mount.. 00:01:42.337 ==> default: Folder Successfully Mounted! 00:01:42.337 00:01:42.337 SUCCESS! 00:01:42.337 00:01:42.337 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:42.337 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:42.337 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:42.337 00:01:42.348 [Pipeline] } 00:01:42.365 [Pipeline] // stage 00:01:42.374 [Pipeline] dir 00:01:42.375 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:42.376 [Pipeline] { 00:01:42.389 [Pipeline] catchError 00:01:42.391 [Pipeline] { 00:01:42.407 [Pipeline] sh 00:01:42.691 + vagrant ssh-config --host vagrant 00:01:42.691 + sed -ne '/^Host/,$p' 00:01:42.691 + tee ssh_conf 00:01:45.237 Host vagrant 00:01:45.237 HostName 192.168.121.63 00:01:45.237 User vagrant 00:01:45.237 Port 22 00:01:45.238 UserKnownHostsFile /dev/null 00:01:45.238 StrictHostKeyChecking no 00:01:45.238 PasswordAuthentication no 00:01:45.238 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:45.238 IdentitiesOnly yes 00:01:45.238 LogLevel FATAL 00:01:45.238 ForwardAgent yes 00:01:45.238 ForwardX11 yes 00:01:45.238 00:01:45.252 [Pipeline] withEnv 00:01:45.255 [Pipeline] { 00:01:45.268 [Pipeline] sh 00:01:45.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:45.551 source /etc/os-release 00:01:45.551 [[ -e /image.version ]] && img=$(< /image.version) 00:01:45.551 # Minimal, systemd-like check. 
00:01:45.551 if [[ -e /.dockerenv ]]; then 00:01:45.551 # Clear garbage from the node'\''s name: 00:01:45.551 # agt-er_autotest_547-896 -> autotest_547-896 00:01:45.551 # $HOSTNAME is the actual container id 00:01:45.551 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:45.551 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:45.551 # We can assume this is a mount from a host where container is running, 00:01:45.551 # so fetch its hostname to easily identify the target swarm worker. 00:01:45.551 container="$(< /etc/hostname) ($agent)" 00:01:45.551 else 00:01:45.551 # Fallback 00:01:45.551 container=$agent 00:01:45.551 fi 00:01:45.551 fi 00:01:45.551 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:45.551 ' 00:01:45.825 [Pipeline] } 00:01:45.837 [Pipeline] // withEnv 00:01:45.844 [Pipeline] setCustomBuildProperty 00:01:45.854 [Pipeline] stage 00:01:45.856 [Pipeline] { (Tests) 00:01:45.870 [Pipeline] sh 00:01:46.155 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:46.427 [Pipeline] sh 00:01:46.711 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:46.990 [Pipeline] timeout 00:01:46.991 Timeout set to expire in 50 min 00:01:46.992 [Pipeline] { 00:01:47.005 [Pipeline] sh 00:01:47.290 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:47.862 HEAD is now at fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:01:47.877 [Pipeline] sh 00:01:48.173 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:48.451 [Pipeline] sh 00:01:48.737 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:49.015 [Pipeline] sh 00:01:49.299 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:49.560 ++ readlink -f spdk_repo 00:01:49.560 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:49.560 + [[ -n /home/vagrant/spdk_repo ]] 00:01:49.560 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:49.560 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:49.560 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:49.560 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:49.560 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:49.560 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:49.560 + cd /home/vagrant/spdk_repo 00:01:49.560 + source /etc/os-release 00:01:49.560 ++ NAME='Fedora Linux' 00:01:49.560 ++ VERSION='39 (Cloud Edition)' 00:01:49.560 ++ ID=fedora 00:01:49.560 ++ VERSION_ID=39 00:01:49.560 ++ VERSION_CODENAME= 00:01:49.560 ++ PLATFORM_ID=platform:f39 00:01:49.560 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:49.560 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:49.560 ++ LOGO=fedora-logo-icon 00:01:49.560 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:49.560 ++ HOME_URL=https://fedoraproject.org/ 00:01:49.560 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:49.560 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:49.560 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:49.560 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:49.560 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:49.560 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:49.560 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:49.560 ++ SUPPORT_END=2024-11-12 00:01:49.560 ++ VARIANT='Cloud Edition' 00:01:49.560 ++ VARIANT_ID=cloud 00:01:49.560 + uname -a 00:01:49.560 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:49.560 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:49.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:50.081 Hugepages 00:01:50.081 node hugesize free / total 00:01:50.081 node0 1048576kB 0 / 0 00:01:50.081 node0 2048kB 0 / 0 00:01:50.081 00:01:50.081 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:50.342 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:50.342 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:50.342 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:50.342 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:50.342 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:50.342 + rm -f /tmp/spdk-ld-path 00:01:50.342 + source autorun-spdk.conf 00:01:50.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.342 ++ SPDK_TEST_NVME=1 00:01:50.342 ++ SPDK_TEST_FTL=1 00:01:50.342 ++ SPDK_TEST_ISAL=1 00:01:50.342 ++ SPDK_RUN_ASAN=1 00:01:50.342 ++ SPDK_RUN_UBSAN=1 00:01:50.342 ++ SPDK_TEST_XNVME=1 00:01:50.342 ++ SPDK_TEST_NVME_FDP=1 00:01:50.342 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.342 ++ RUN_NIGHTLY=1 00:01:50.342 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:50.342 + [[ -n '' ]] 00:01:50.342 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:50.342 + for M in /var/spdk/build-*-manifest.txt 00:01:50.342 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:50.342 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.342 + for M in /var/spdk/build-*-manifest.txt 00:01:50.342 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:50.342 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.342 + for M in /var/spdk/build-*-manifest.txt 00:01:50.342 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:50.342 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.342 ++ uname 00:01:50.342 + [[ Linux == \L\i\n\u\x ]] 00:01:50.342 + sudo dmesg -T 00:01:50.342 + sudo dmesg --clear 00:01:50.342 + dmesg_pid=5028 00:01:50.342 
+ [[ Fedora Linux == FreeBSD ]] 00:01:50.342 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.342 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.342 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:50.342 + [[ -x /usr/src/fio-static/fio ]] 00:01:50.342 + sudo dmesg -Tw 00:01:50.342 + export FIO_BIN=/usr/src/fio-static/fio 00:01:50.342 + FIO_BIN=/usr/src/fio-static/fio 00:01:50.342 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:50.342 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:50.342 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:50.342 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.342 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.342 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:50.342 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.342 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.342 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.603 04:22:13 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:50.603 04:22:13 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.603 04:22:13 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:50.603 04:22:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:50.603 04:22:13 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.603 04:22:13 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:50.603 04:22:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:50.603 04:22:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:50.603 04:22:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:50.603 04:22:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:50.603 04:22:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:50.603 04:22:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.603 04:22:13 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.603 04:22:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.603 04:22:13 -- paths/export.sh@5 -- $ export PATH 00:01:50.603 04:22:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:50.603 04:22:13 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:50.603 04:22:13 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:50.603 04:22:13 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730607733.XXXXXX 00:01:50.603 04:22:13 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730607733.vbY5ko 00:01:50.603 04:22:13 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:50.603 04:22:13 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:50.603 04:22:13 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:50.603 04:22:13 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:50.603 04:22:13 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:50.603 04:22:13 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:50.603 04:22:13 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:50.603 04:22:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.603 04:22:13 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:50.603 04:22:13 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:50.603 04:22:13 -- pm/common@17 -- $ local monitor 00:01:50.603 04:22:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.603 04:22:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.603 04:22:13 -- pm/common@25 -- $ sleep 1 00:01:50.603 04:22:13 -- pm/common@21 -- $ date +%s 00:01:50.603 04:22:13 -- pm/common@21 -- $ date +%s 00:01:50.603 04:22:13 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730607733 00:01:50.603 04:22:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730607733 00:01:50.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730607733_collect-cpu-load.pm.log 00:01:50.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730607733_collect-vmstat.pm.log 00:01:51.548 04:22:14 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:51.548 04:22:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:51.548 04:22:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:51.548 04:22:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:51.548 04:22:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:51.548 Sun Nov 3 04:22:14 AM UTC 2024 00:01:51.548 04:22:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:51.548 v25.01-pre-124-gfa3ab7384 00:01:51.548 04:22:14 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:51.548 04:22:14 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:51.548 04:22:14 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:51.548 04:22:14 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:51.548 04:22:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.548 ************************************ 00:01:51.548 START TEST asan 00:01:51.548 ************************************ 00:01:51.548 using asan 00:01:51.548 04:22:14 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:51.548 00:01:51.548 real 0m0.000s 00:01:51.548 user 0m0.000s 00:01:51.548 sys 0m0.000s 00:01:51.548 ************************************ 00:01:51.548 END TEST asan 00:01:51.548 ************************************ 00:01:51.548 04:22:14 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:51.548 04:22:14 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:51.811 04:22:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:51.811 04:22:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:51.811 04:22:14 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:51.811 04:22:14 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:51.811 04:22:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.811 ************************************ 00:01:51.811 START TEST ubsan 00:01:51.811 ************************************ 00:01:51.811 using ubsan 00:01:51.811 04:22:14 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:51.811 00:01:51.811 real 0m0.000s 00:01:51.811 user 0m0.000s 00:01:51.811 sys 0m0.000s 00:01:51.811 ************************************ 00:01:51.811 END TEST ubsan 00:01:51.811 ************************************ 00:01:51.811 04:22:14 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:51.811 04:22:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:51.811 04:22:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:51.811 04:22:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:51.811 04:22:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:51.811 04:22:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:51.811 04:22:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:51.811 04:22:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:51.811 04:22:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:51.811 04:22:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:51.811 04:22:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:51.811 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:51.811 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:52.383 Using 'verbs' RDMA provider 00:02:05.560 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:15.593 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:15.593 Creating mk/config.mk...done. 00:02:15.593 Creating mk/cc.flags.mk...done. 00:02:15.593 Type 'make' to build. 00:02:15.593 04:22:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:15.593 04:22:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:15.593 04:22:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:15.593 04:22:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.593 ************************************ 00:02:15.593 START TEST make 00:02:15.593 ************************************ 00:02:15.593 04:22:38 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:15.855 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:15.855 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:15.855 meson setup builddir \ 00:02:15.855 -Dwith-libaio=enabled \ 00:02:15.855 -Dwith-liburing=enabled \ 00:02:15.855 -Dwith-libvfn=disabled \ 00:02:15.855 -Dwith-spdk=disabled \ 00:02:15.855 -Dexamples=false \ 00:02:15.855 -Dtests=false \ 00:02:15.855 -Dtools=false && \ 00:02:15.855 meson compile -C builddir && \ 00:02:15.855 cd -) 00:02:15.855 make[1]: Nothing to be done for 'all'. 
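(For reference: the xnvme subproject configure/build that the make target drives above can be reproduced by hand with the same Meson options echoed in the log. A minimal sketch; the path assumes the spdk_repo checkout layout used by this job.)

# Minimal sketch: re-run the xnvme Meson configure/compile step shown above by hand.
# Assumes the job's checkout layout (/home/vagrant/spdk_repo/spdk); adjust the path as needed.
cd /home/vagrant/spdk_repo/spdk/xnvme
meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir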
00:02:18.401 The Meson build system 00:02:18.401 Version: 1.5.0 00:02:18.401 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:18.401 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:18.401 Build type: native build 00:02:18.401 Project name: xnvme 00:02:18.401 Project version: 0.7.5 00:02:18.401 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.401 C linker for the host machine: cc ld.bfd 2.40-14 00:02:18.401 Host machine cpu family: x86_64 00:02:18.401 Host machine cpu: x86_64 00:02:18.401 Message: host_machine.system: linux 00:02:18.401 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:18.401 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:18.401 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:18.401 Run-time dependency threads found: YES 00:02:18.401 Has header "setupapi.h" : NO 00:02:18.401 Has header "linux/blkzoned.h" : YES 00:02:18.402 Has header "linux/blkzoned.h" : YES (cached) 00:02:18.402 Has header "libaio.h" : YES 00:02:18.402 Library aio found: YES 00:02:18.402 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.402 Run-time dependency liburing found: YES 2.2 00:02:18.402 Dependency libvfn skipped: feature with-libvfn disabled 00:02:18.402 Found CMake: /usr/bin/cmake (3.27.7) 00:02:18.402 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:18.402 Subproject spdk : skipped: feature with-spdk disabled 00:02:18.402 Run-time dependency appleframeworks found: NO (tried framework) 00:02:18.402 Run-time dependency appleframeworks found: NO (tried framework) 00:02:18.402 Library rt found: YES 00:02:18.402 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:18.402 Configuring xnvme_config.h using configuration 00:02:18.402 Configuring xnvme.spec using configuration 00:02:18.402 Run-time dependency bash-completion found: YES 2.11 00:02:18.402 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:18.402 Program cp found: YES (/usr/bin/cp) 00:02:18.402 Build targets in project: 3 00:02:18.402 00:02:18.402 xnvme 0.7.5 00:02:18.402 00:02:18.402 Subprojects 00:02:18.402 spdk : NO Feature 'with-spdk' disabled 00:02:18.402 00:02:18.402 User defined options 00:02:18.402 examples : false 00:02:18.402 tests : false 00:02:18.402 tools : false 00:02:18.402 with-libaio : enabled 00:02:18.402 with-liburing: enabled 00:02:18.402 with-libvfn : disabled 00:02:18.402 with-spdk : disabled 00:02:18.402 00:02:18.402 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.402 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:18.402 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:18.662 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:18.662 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:18.662 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:18.662 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:18.662 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:18.662 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:18.662 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:18.662 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:18.662 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:18.662 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:18.662 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:18.662 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:18.662 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:18.662 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:18.662 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:18.662 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:18.663 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:18.923 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:18.923 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:18.923 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:18.923 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:18.923 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:18.923 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:18.923 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:18.923 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:18.923 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:18.923 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:18.923 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:18.923 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:18.923 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:18.923 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:18.923 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:18.923 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:18.923 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:18.923 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:18.923 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:18.923 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:18.923 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:18.923 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:18.923 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:18.923 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:18.923 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:18.923 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:18.923 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:18.923 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:18.923 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:18.923 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:18.923 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:18.923 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:18.923 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:18.923 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:18.923 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:19.183 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:19.183 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:19.183 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:19.183 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:19.183 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:19.183 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:19.183 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:19.183 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:19.183 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:19.183 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:19.183 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:19.183 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:19.183 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:19.183 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:19.183 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:19.183 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:19.444 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:19.444 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:19.444 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:19.444 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:19.705 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:19.705 [75/76] Linking static target lib/libxnvme.a 00:02:19.705 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:19.705 INFO: autodetecting backend as ninja 00:02:19.705 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:19.966 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:28.103 The Meson build system 00:02:28.103 Version: 1.5.0 00:02:28.103 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:28.103 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:28.103 Build type: native build 00:02:28.103 Program cat found: YES (/usr/bin/cat) 00:02:28.103 Project name: DPDK 00:02:28.103 Project version: 24.03.0 00:02:28.103 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:28.103 C linker for the host machine: cc ld.bfd 2.40-14 00:02:28.103 Host machine cpu family: x86_64 00:02:28.103 Host machine cpu: x86_64 00:02:28.103 Message: ## Building in Developer Mode ## 00:02:28.103 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.103 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:28.103 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.103 Program python3 found: YES (/usr/bin/python3) 00:02:28.103 Program cat found: YES (/usr/bin/cat) 00:02:28.103 Compiler for C supports arguments -march=native: YES 00:02:28.103 Checking for size of "void *" : 8 00:02:28.103 Checking for size of "void *" : 8 (cached) 00:02:28.103 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:28.103 Library m found: YES 00:02:28.103 Library numa found: YES 00:02:28.103 Has header "numaif.h" : YES 00:02:28.103 Library fdt found: NO 00:02:28.103 Library execinfo found: NO 00:02:28.103 Has header "execinfo.h" : YES 00:02:28.103 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:28.103 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:28.103 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.103 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.103 Run-time dependency openssl found: YES 3.1.1 00:02:28.103 Run-time dependency libpcap found: YES 1.10.4 00:02:28.103 Has header "pcap.h" with dependency libpcap: YES 00:02:28.103 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.103 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.103 Compiler for C supports arguments -Wformat: YES 00:02:28.103 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.103 Compiler for C supports arguments -Wformat-security: NO 00:02:28.103 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.103 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.103 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.103 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.103 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.103 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.103 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.103 Compiler for C supports arguments -Wundef: YES 00:02:28.103 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.103 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.103 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.103 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.103 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:28.103 Program objdump found: YES (/usr/bin/objdump) 00:02:28.103 Compiler for C supports arguments -mavx512f: YES 00:02:28.103 Checking if "AVX512 checking" compiles: YES 00:02:28.103 Fetching value of define "__SSE4_2__" : 1 00:02:28.103 Fetching value of define "__AES__" : 1 00:02:28.103 Fetching value of define "__AVX__" : 1 00:02:28.103 Fetching value of define "__AVX2__" : 1 00:02:28.103 Fetching value of define "__AVX512BW__" : 1 00:02:28.103 Fetching value of define "__AVX512CD__" : 1 00:02:28.103 Fetching value of define "__AVX512DQ__" : 1 00:02:28.103 Fetching value of define "__AVX512F__" : 1 00:02:28.103 Fetching value of define "__AVX512VL__" : 1 00:02:28.103 Fetching value of define "__PCLMUL__" : 1 00:02:28.103 Fetching value of define "__RDRND__" : 1 00:02:28.103 Fetching value of define "__RDSEED__" : 1 00:02:28.103 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:28.103 Fetching value of define "__znver1__" : (undefined) 00:02:28.103 Fetching value of define "__znver2__" : (undefined) 00:02:28.103 Fetching value of define "__znver3__" : (undefined) 00:02:28.103 Fetching value of define "__znver4__" : (undefined) 00:02:28.103 Library asan found: YES 00:02:28.103 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.103 Message: lib/log: Defining dependency "log" 00:02:28.103 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.103 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.103 Library rt found: YES 00:02:28.103 Checking for function "getentropy" : NO 00:02:28.103 Message: 
lib/eal: Defining dependency "eal" 00:02:28.103 Message: lib/ring: Defining dependency "ring" 00:02:28.103 Message: lib/rcu: Defining dependency "rcu" 00:02:28.103 Message: lib/mempool: Defining dependency "mempool" 00:02:28.103 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.103 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:28.103 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:28.103 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:28.103 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:28.103 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:28.103 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:28.103 Compiler for C supports arguments -mpclmul: YES 00:02:28.103 Compiler for C supports arguments -maes: YES 00:02:28.103 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.103 Compiler for C supports arguments -mavx512bw: YES 00:02:28.103 Compiler for C supports arguments -mavx512dq: YES 00:02:28.103 Compiler for C supports arguments -mavx512vl: YES 00:02:28.103 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.103 Compiler for C supports arguments -mavx2: YES 00:02:28.103 Compiler for C supports arguments -mavx: YES 00:02:28.103 Message: lib/net: Defining dependency "net" 00:02:28.103 Message: lib/meter: Defining dependency "meter" 00:02:28.103 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.103 Message: lib/pci: Defining dependency "pci" 00:02:28.103 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.103 Message: lib/hash: Defining dependency "hash" 00:02:28.103 Message: lib/timer: Defining dependency "timer" 00:02:28.103 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.103 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.103 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.103 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:28.103 Message: lib/power: Defining dependency "power" 00:02:28.103 Message: lib/reorder: Defining dependency "reorder" 00:02:28.103 Message: lib/security: Defining dependency "security" 00:02:28.103 Has header "linux/userfaultfd.h" : YES 00:02:28.103 Has header "linux/vduse.h" : YES 00:02:28.103 Message: lib/vhost: Defining dependency "vhost" 00:02:28.103 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.103 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.103 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.103 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.104 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:28.104 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:28.104 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:28.104 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:28.104 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:28.104 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:28.104 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:28.104 Configuring doxy-api-html.conf using configuration 00:02:28.104 Configuring doxy-api-man.conf using configuration 00:02:28.104 Program mandb found: YES (/usr/bin/mandb) 00:02:28.104 Program sphinx-build found: NO 00:02:28.104 Configuring rte_build_config.h using configuration 00:02:28.104 Message: 00:02:28.104 ================= 00:02:28.104 Applications Enabled 00:02:28.104 
================= 00:02:28.104 00:02:28.104 apps: 00:02:28.104 00:02:28.104 00:02:28.104 Message: 00:02:28.104 ================= 00:02:28.104 Libraries Enabled 00:02:28.104 ================= 00:02:28.104 00:02:28.104 libs: 00:02:28.104 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:28.104 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:28.104 cryptodev, dmadev, power, reorder, security, vhost, 00:02:28.104 00:02:28.104 Message: 00:02:28.104 =============== 00:02:28.104 Drivers Enabled 00:02:28.104 =============== 00:02:28.104 00:02:28.104 common: 00:02:28.104 00:02:28.104 bus: 00:02:28.104 pci, vdev, 00:02:28.104 mempool: 00:02:28.104 ring, 00:02:28.104 dma: 00:02:28.104 00:02:28.104 net: 00:02:28.104 00:02:28.104 crypto: 00:02:28.104 00:02:28.104 compress: 00:02:28.104 00:02:28.104 vdpa: 00:02:28.104 00:02:28.104 00:02:28.104 Message: 00:02:28.104 ================= 00:02:28.104 Content Skipped 00:02:28.104 ================= 00:02:28.104 00:02:28.104 apps: 00:02:28.104 dumpcap: explicitly disabled via build config 00:02:28.104 graph: explicitly disabled via build config 00:02:28.104 pdump: explicitly disabled via build config 00:02:28.104 proc-info: explicitly disabled via build config 00:02:28.104 test-acl: explicitly disabled via build config 00:02:28.104 test-bbdev: explicitly disabled via build config 00:02:28.104 test-cmdline: explicitly disabled via build config 00:02:28.104 test-compress-perf: explicitly disabled via build config 00:02:28.104 test-crypto-perf: explicitly disabled via build config 00:02:28.104 test-dma-perf: explicitly disabled via build config 00:02:28.104 test-eventdev: explicitly disabled via build config 00:02:28.104 test-fib: explicitly disabled via build config 00:02:28.104 test-flow-perf: explicitly disabled via build config 00:02:28.104 test-gpudev: explicitly disabled via build config 00:02:28.104 test-mldev: explicitly disabled via build config 00:02:28.104 test-pipeline: explicitly disabled via build config 00:02:28.104 test-pmd: explicitly disabled via build config 00:02:28.104 test-regex: explicitly disabled via build config 00:02:28.104 test-sad: explicitly disabled via build config 00:02:28.104 test-security-perf: explicitly disabled via build config 00:02:28.104 00:02:28.104 libs: 00:02:28.104 argparse: explicitly disabled via build config 00:02:28.104 metrics: explicitly disabled via build config 00:02:28.104 acl: explicitly disabled via build config 00:02:28.104 bbdev: explicitly disabled via build config 00:02:28.104 bitratestats: explicitly disabled via build config 00:02:28.104 bpf: explicitly disabled via build config 00:02:28.104 cfgfile: explicitly disabled via build config 00:02:28.104 distributor: explicitly disabled via build config 00:02:28.104 efd: explicitly disabled via build config 00:02:28.104 eventdev: explicitly disabled via build config 00:02:28.104 dispatcher: explicitly disabled via build config 00:02:28.104 gpudev: explicitly disabled via build config 00:02:28.104 gro: explicitly disabled via build config 00:02:28.104 gso: explicitly disabled via build config 00:02:28.104 ip_frag: explicitly disabled via build config 00:02:28.104 jobstats: explicitly disabled via build config 00:02:28.104 latencystats: explicitly disabled via build config 00:02:28.104 lpm: explicitly disabled via build config 00:02:28.104 member: explicitly disabled via build config 00:02:28.104 pcapng: explicitly disabled via build config 00:02:28.104 rawdev: explicitly disabled via build config 00:02:28.104 regexdev: explicitly 
disabled via build config 00:02:28.104 mldev: explicitly disabled via build config 00:02:28.104 rib: explicitly disabled via build config 00:02:28.104 sched: explicitly disabled via build config 00:02:28.104 stack: explicitly disabled via build config 00:02:28.104 ipsec: explicitly disabled via build config 00:02:28.104 pdcp: explicitly disabled via build config 00:02:28.104 fib: explicitly disabled via build config 00:02:28.104 port: explicitly disabled via build config 00:02:28.104 pdump: explicitly disabled via build config 00:02:28.104 table: explicitly disabled via build config 00:02:28.104 pipeline: explicitly disabled via build config 00:02:28.104 graph: explicitly disabled via build config 00:02:28.104 node: explicitly disabled via build config 00:02:28.104 00:02:28.104 drivers: 00:02:28.104 common/cpt: not in enabled drivers build config 00:02:28.104 common/dpaax: not in enabled drivers build config 00:02:28.104 common/iavf: not in enabled drivers build config 00:02:28.104 common/idpf: not in enabled drivers build config 00:02:28.104 common/ionic: not in enabled drivers build config 00:02:28.104 common/mvep: not in enabled drivers build config 00:02:28.104 common/octeontx: not in enabled drivers build config 00:02:28.104 bus/auxiliary: not in enabled drivers build config 00:02:28.104 bus/cdx: not in enabled drivers build config 00:02:28.104 bus/dpaa: not in enabled drivers build config 00:02:28.104 bus/fslmc: not in enabled drivers build config 00:02:28.104 bus/ifpga: not in enabled drivers build config 00:02:28.104 bus/platform: not in enabled drivers build config 00:02:28.104 bus/uacce: not in enabled drivers build config 00:02:28.104 bus/vmbus: not in enabled drivers build config 00:02:28.104 common/cnxk: not in enabled drivers build config 00:02:28.104 common/mlx5: not in enabled drivers build config 00:02:28.104 common/nfp: not in enabled drivers build config 00:02:28.104 common/nitrox: not in enabled drivers build config 00:02:28.104 common/qat: not in enabled drivers build config 00:02:28.104 common/sfc_efx: not in enabled drivers build config 00:02:28.104 mempool/bucket: not in enabled drivers build config 00:02:28.104 mempool/cnxk: not in enabled drivers build config 00:02:28.104 mempool/dpaa: not in enabled drivers build config 00:02:28.104 mempool/dpaa2: not in enabled drivers build config 00:02:28.104 mempool/octeontx: not in enabled drivers build config 00:02:28.104 mempool/stack: not in enabled drivers build config 00:02:28.104 dma/cnxk: not in enabled drivers build config 00:02:28.104 dma/dpaa: not in enabled drivers build config 00:02:28.104 dma/dpaa2: not in enabled drivers build config 00:02:28.104 dma/hisilicon: not in enabled drivers build config 00:02:28.104 dma/idxd: not in enabled drivers build config 00:02:28.104 dma/ioat: not in enabled drivers build config 00:02:28.104 dma/skeleton: not in enabled drivers build config 00:02:28.104 net/af_packet: not in enabled drivers build config 00:02:28.104 net/af_xdp: not in enabled drivers build config 00:02:28.104 net/ark: not in enabled drivers build config 00:02:28.104 net/atlantic: not in enabled drivers build config 00:02:28.104 net/avp: not in enabled drivers build config 00:02:28.104 net/axgbe: not in enabled drivers build config 00:02:28.104 net/bnx2x: not in enabled drivers build config 00:02:28.104 net/bnxt: not in enabled drivers build config 00:02:28.104 net/bonding: not in enabled drivers build config 00:02:28.104 net/cnxk: not in enabled drivers build config 00:02:28.104 net/cpfl: not in enabled drivers 
build config 00:02:28.104 net/cxgbe: not in enabled drivers build config 00:02:28.104 net/dpaa: not in enabled drivers build config 00:02:28.104 net/dpaa2: not in enabled drivers build config 00:02:28.104 net/e1000: not in enabled drivers build config 00:02:28.104 net/ena: not in enabled drivers build config 00:02:28.104 net/enetc: not in enabled drivers build config 00:02:28.104 net/enetfec: not in enabled drivers build config 00:02:28.104 net/enic: not in enabled drivers build config 00:02:28.104 net/failsafe: not in enabled drivers build config 00:02:28.104 net/fm10k: not in enabled drivers build config 00:02:28.104 net/gve: not in enabled drivers build config 00:02:28.104 net/hinic: not in enabled drivers build config 00:02:28.104 net/hns3: not in enabled drivers build config 00:02:28.104 net/i40e: not in enabled drivers build config 00:02:28.104 net/iavf: not in enabled drivers build config 00:02:28.104 net/ice: not in enabled drivers build config 00:02:28.104 net/idpf: not in enabled drivers build config 00:02:28.104 net/igc: not in enabled drivers build config 00:02:28.104 net/ionic: not in enabled drivers build config 00:02:28.104 net/ipn3ke: not in enabled drivers build config 00:02:28.104 net/ixgbe: not in enabled drivers build config 00:02:28.104 net/mana: not in enabled drivers build config 00:02:28.104 net/memif: not in enabled drivers build config 00:02:28.104 net/mlx4: not in enabled drivers build config 00:02:28.104 net/mlx5: not in enabled drivers build config 00:02:28.104 net/mvneta: not in enabled drivers build config 00:02:28.104 net/mvpp2: not in enabled drivers build config 00:02:28.104 net/netvsc: not in enabled drivers build config 00:02:28.104 net/nfb: not in enabled drivers build config 00:02:28.104 net/nfp: not in enabled drivers build config 00:02:28.104 net/ngbe: not in enabled drivers build config 00:02:28.104 net/null: not in enabled drivers build config 00:02:28.104 net/octeontx: not in enabled drivers build config 00:02:28.104 net/octeon_ep: not in enabled drivers build config 00:02:28.104 net/pcap: not in enabled drivers build config 00:02:28.104 net/pfe: not in enabled drivers build config 00:02:28.104 net/qede: not in enabled drivers build config 00:02:28.104 net/ring: not in enabled drivers build config 00:02:28.104 net/sfc: not in enabled drivers build config 00:02:28.104 net/softnic: not in enabled drivers build config 00:02:28.104 net/tap: not in enabled drivers build config 00:02:28.104 net/thunderx: not in enabled drivers build config 00:02:28.104 net/txgbe: not in enabled drivers build config 00:02:28.104 net/vdev_netvsc: not in enabled drivers build config 00:02:28.104 net/vhost: not in enabled drivers build config 00:02:28.104 net/virtio: not in enabled drivers build config 00:02:28.105 net/vmxnet3: not in enabled drivers build config 00:02:28.105 raw/*: missing internal dependency, "rawdev" 00:02:28.105 crypto/armv8: not in enabled drivers build config 00:02:28.105 crypto/bcmfs: not in enabled drivers build config 00:02:28.105 crypto/caam_jr: not in enabled drivers build config 00:02:28.105 crypto/ccp: not in enabled drivers build config 00:02:28.105 crypto/cnxk: not in enabled drivers build config 00:02:28.105 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.105 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.105 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.105 crypto/mlx5: not in enabled drivers build config 00:02:28.105 crypto/mvsam: not in enabled drivers build config 00:02:28.105 crypto/nitrox: 
not in enabled drivers build config 00:02:28.105 crypto/null: not in enabled drivers build config 00:02:28.105 crypto/octeontx: not in enabled drivers build config 00:02:28.105 crypto/openssl: not in enabled drivers build config 00:02:28.105 crypto/scheduler: not in enabled drivers build config 00:02:28.105 crypto/uadk: not in enabled drivers build config 00:02:28.105 crypto/virtio: not in enabled drivers build config 00:02:28.105 compress/isal: not in enabled drivers build config 00:02:28.105 compress/mlx5: not in enabled drivers build config 00:02:28.105 compress/nitrox: not in enabled drivers build config 00:02:28.105 compress/octeontx: not in enabled drivers build config 00:02:28.105 compress/zlib: not in enabled drivers build config 00:02:28.105 regex/*: missing internal dependency, "regexdev" 00:02:28.105 ml/*: missing internal dependency, "mldev" 00:02:28.105 vdpa/ifc: not in enabled drivers build config 00:02:28.105 vdpa/mlx5: not in enabled drivers build config 00:02:28.105 vdpa/nfp: not in enabled drivers build config 00:02:28.105 vdpa/sfc: not in enabled drivers build config 00:02:28.105 event/*: missing internal dependency, "eventdev" 00:02:28.105 baseband/*: missing internal dependency, "bbdev" 00:02:28.105 gpu/*: missing internal dependency, "gpudev" 00:02:28.105 00:02:28.105 00:02:28.105 Build targets in project: 84 00:02:28.105 00:02:28.105 DPDK 24.03.0 00:02:28.105 00:02:28.105 User defined options 00:02:28.105 buildtype : debug 00:02:28.105 default_library : shared 00:02:28.105 libdir : lib 00:02:28.105 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.105 b_sanitize : address 00:02:28.105 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:28.105 c_link_args : 00:02:28.105 cpu_instruction_set: native 00:02:28.105 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:28.105 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:28.105 enable_docs : false 00:02:28.105 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:28.105 enable_kmods : false 00:02:28.105 max_lcores : 128 00:02:28.105 tests : false 00:02:28.105 00:02:28.105 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.105 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:28.105 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.105 [2/267] Linking static target lib/librte_kvargs.a 00:02:28.105 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:28.105 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:28.105 [5/267] Linking static target lib/librte_log.a 00:02:28.105 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.105 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.105 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.105 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.105 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:28.105 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.105 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.105 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.105 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.105 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.105 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.105 [17/267] Linking static target lib/librte_telemetry.a 00:02:28.105 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.105 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.105 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.105 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.363 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.363 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.363 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.363 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.363 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.363 [27/267] Linking target lib/librte_log.so.24.1 00:02:28.363 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.363 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.363 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.622 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:28.622 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.622 [33/267] Linking target lib/librte_kvargs.so.24.1 00:02:28.622 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.622 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.622 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:28.622 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.622 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:28.622 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.881 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.881 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.881 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.881 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.881 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.881 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.881 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.881 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.139 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.139 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:29.139 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.139 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.398 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.398 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.398 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.398 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.398 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.398 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.398 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.398 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.398 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.655 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.655 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.655 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.914 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.914 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.914 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.914 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.914 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.914 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.914 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.914 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.172 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.172 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.172 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.172 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.430 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.430 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.430 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.430 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.430 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.430 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.430 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.688 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.688 [84/267] Linking static target lib/librte_ring.a 00:02:30.689 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.689 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.689 [87/267] Linking static target lib/librte_eal.a 00:02:30.947 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.947 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.947 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.947 [91/267] 
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.947 [92/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.947 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.947 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.947 [95/267] Linking static target lib/librte_mempool.a 00:02:30.947 [96/267] Linking static target lib/librte_rcu.a 00:02:31.206 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.206 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.206 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:31.206 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.206 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.464 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.464 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.464 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.464 [105/267] Linking static target lib/librte_mbuf.a 00:02:31.464 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.464 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:31.464 [108/267] Linking static target lib/librte_net.a 00:02:31.722 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.722 [110/267] Linking static target lib/librte_meter.a 00:02:31.722 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.722 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.722 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.722 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.981 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.981 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.981 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.981 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.981 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.239 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.239 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.498 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.498 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.498 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.498 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.498 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.498 [127/267] Linking static target lib/librte_pci.a 00:02:32.498 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.756 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.756 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.756 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.756 [132/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.756 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.756 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.756 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.756 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.756 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.756 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.014 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.014 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.014 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.014 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.014 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.014 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:33.014 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.014 [146/267] Linking static target lib/librte_cmdline.a 00:02:33.273 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:33.273 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.273 [149/267] Linking static target lib/librte_timer.a 00:02:33.273 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.273 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.530 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.531 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.531 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.531 [155/267] Linking static target lib/librte_ethdev.a 00:02:33.531 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.531 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.789 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.789 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.789 [160/267] Linking static target lib/librte_hash.a 00:02:33.789 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.789 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:33.789 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.048 [164/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.048 [165/267] Linking static target lib/librte_compressdev.a 00:02:34.048 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.048 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.048 [168/267] Linking static target lib/librte_dmadev.a 00:02:34.048 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:34.306 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.306 [171/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.306 [172/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:34.306 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.565 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.565 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.565 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.565 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:34.565 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.565 [179/267] Linking static target lib/librte_cryptodev.a 00:02:34.565 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.565 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.565 [182/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.565 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.824 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:34.824 [185/267] Linking static target lib/librte_power.a 00:02:35.082 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.083 [187/267] Linking static target lib/librte_reorder.a 00:02:35.083 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.083 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.083 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.341 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.341 [192/267] Linking static target lib/librte_security.a 00:02:35.341 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.341 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.600 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.600 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.600 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.858 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.858 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.858 [200/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.117 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.117 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.117 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.117 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:36.117 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.375 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.375 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.375 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.375 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.375 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.376 [211/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.634 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.634 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.634 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.634 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:36.634 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.634 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.634 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:36.634 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.634 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.634 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.634 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.634 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.634 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:36.634 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.892 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.824 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.082 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.082 [229/267] Linking target lib/librte_eal.so.24.1 00:02:38.340 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:38.340 [231/267] Linking target lib/librte_ring.so.24.1 00:02:38.340 [232/267] Linking target lib/librte_meter.so.24.1 00:02:38.340 [233/267] Linking target lib/librte_timer.so.24.1 00:02:38.340 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:38.340 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:38.340 [236/267] Linking target lib/librte_pci.so.24.1 00:02:38.340 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:38.340 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:38.598 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:38.598 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:38.598 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:38.598 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:38.598 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:38.598 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:38.598 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:38.598 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:38.598 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:38.598 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:38.856 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:38.856 [250/267] Linking target lib/librte_net.so.24.1 00:02:38.856 [251/267] Linking target 
lib/librte_reorder.so.24.1 00:02:38.856 [252/267] Linking target lib/librte_compressdev.so.24.1 00:02:38.856 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:38.856 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:38.856 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:38.856 [256/267] Linking target lib/librte_security.so.24.1 00:02:38.856 [257/267] Linking target lib/librte_hash.so.24.1 00:02:38.856 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:39.114 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.114 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:39.114 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:39.114 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:39.371 [263/267] Linking target lib/librte_power.so.24.1 00:02:39.938 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.938 [265/267] Linking static target lib/librte_vhost.a 00:02:41.349 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.349 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:41.349 INFO: autodetecting backend as ninja 00:02:41.349 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:56.221 CC lib/ut_mock/mock.o 00:02:56.221 CC lib/log/log.o 00:02:56.221 CC lib/log/log_flags.o 00:02:56.221 CC lib/log/log_deprecated.o 00:02:56.221 CC lib/ut/ut.o 00:02:56.221 LIB libspdk_ut_mock.a 00:02:56.221 LIB libspdk_log.a 00:02:56.221 LIB libspdk_ut.a 00:02:56.221 SO libspdk_ut_mock.so.6.0 00:02:56.221 SO libspdk_log.so.7.1 00:02:56.221 SO libspdk_ut.so.2.0 00:02:56.221 SYMLINK libspdk_ut_mock.so 00:02:56.221 SYMLINK libspdk_ut.so 00:02:56.221 SYMLINK libspdk_log.so 00:02:56.221 CC lib/dma/dma.o 00:02:56.221 CXX lib/trace_parser/trace.o 00:02:56.221 CC lib/util/base64.o 00:02:56.221 CC lib/util/bit_array.o 00:02:56.221 CC lib/ioat/ioat.o 00:02:56.221 CC lib/util/cpuset.o 00:02:56.221 CC lib/util/crc16.o 00:02:56.221 CC lib/util/crc32.o 00:02:56.221 CC lib/util/crc32c.o 00:02:56.221 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.221 CC lib/util/crc32_ieee.o 00:02:56.221 CC lib/vfio_user/host/vfio_user.o 00:02:56.221 CC lib/util/crc64.o 00:02:56.221 CC lib/util/dif.o 00:02:56.221 CC lib/util/fd.o 00:02:56.221 LIB libspdk_dma.a 00:02:56.221 CC lib/util/fd_group.o 00:02:56.221 CC lib/util/file.o 00:02:56.221 SO libspdk_dma.so.5.0 00:02:56.221 CC lib/util/hexlify.o 00:02:56.221 SYMLINK libspdk_dma.so 00:02:56.221 CC lib/util/iov.o 00:02:56.221 LIB libspdk_ioat.a 00:02:56.221 CC lib/util/math.o 00:02:56.221 SO libspdk_ioat.so.7.0 00:02:56.221 CC lib/util/net.o 00:02:56.221 LIB libspdk_vfio_user.a 00:02:56.221 CC lib/util/pipe.o 00:02:56.221 SYMLINK libspdk_ioat.so 00:02:56.221 CC lib/util/strerror_tls.o 00:02:56.221 CC lib/util/string.o 00:02:56.221 SO libspdk_vfio_user.so.5.0 00:02:56.221 CC lib/util/uuid.o 00:02:56.221 SYMLINK libspdk_vfio_user.so 00:02:56.221 CC lib/util/xor.o 00:02:56.221 CC lib/util/zipf.o 00:02:56.221 CC lib/util/md5.o 00:02:56.221 LIB libspdk_trace_parser.a 00:02:56.221 LIB libspdk_util.a 00:02:56.221 SO libspdk_trace_parser.so.6.0 00:02:56.478 SO libspdk_util.so.10.0 00:02:56.478 SYMLINK libspdk_trace_parser.so 00:02:56.478 SYMLINK libspdk_util.so 00:02:56.792 CC 
lib/rdma_provider/common.o 00:02:56.792 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.792 CC lib/rdma_utils/rdma_utils.o 00:02:56.792 CC lib/idxd/idxd.o 00:02:56.792 CC lib/idxd/idxd_kernel.o 00:02:56.792 CC lib/idxd/idxd_user.o 00:02:56.792 CC lib/conf/conf.o 00:02:56.792 CC lib/vmd/vmd.o 00:02:56.792 CC lib/env_dpdk/env.o 00:02:56.792 CC lib/json/json_parse.o 00:02:56.792 CC lib/vmd/led.o 00:02:56.792 CC lib/env_dpdk/memory.o 00:02:56.792 LIB libspdk_rdma_provider.a 00:02:56.792 CC lib/json/json_util.o 00:02:56.792 SO libspdk_rdma_provider.so.6.0 00:02:56.792 LIB libspdk_conf.a 00:02:56.792 CC lib/env_dpdk/pci.o 00:02:56.792 SO libspdk_conf.so.6.0 00:02:56.792 SYMLINK libspdk_rdma_provider.so 00:02:56.792 LIB libspdk_rdma_utils.a 00:02:56.792 CC lib/env_dpdk/init.o 00:02:56.792 SO libspdk_rdma_utils.so.1.0 00:02:56.792 CC lib/env_dpdk/threads.o 00:02:57.048 SYMLINK libspdk_conf.so 00:02:57.048 CC lib/env_dpdk/pci_ioat.o 00:02:57.048 SYMLINK libspdk_rdma_utils.so 00:02:57.048 CC lib/json/json_write.o 00:02:57.048 CC lib/env_dpdk/pci_virtio.o 00:02:57.048 CC lib/env_dpdk/pci_vmd.o 00:02:57.048 CC lib/env_dpdk/pci_idxd.o 00:02:57.048 CC lib/env_dpdk/pci_event.o 00:02:57.048 CC lib/env_dpdk/sigbus_handler.o 00:02:57.305 LIB libspdk_json.a 00:02:57.305 LIB libspdk_vmd.a 00:02:57.305 SO libspdk_json.so.6.0 00:02:57.305 SO libspdk_vmd.so.6.0 00:02:57.305 CC lib/env_dpdk/pci_dpdk.o 00:02:57.305 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.305 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.305 SYMLINK libspdk_json.so 00:02:57.305 SYMLINK libspdk_vmd.so 00:02:57.305 LIB libspdk_idxd.a 00:02:57.305 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.305 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.305 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.305 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.305 SO libspdk_idxd.so.12.1 00:02:57.562 SYMLINK libspdk_idxd.so 00:02:57.562 LIB libspdk_jsonrpc.a 00:02:57.562 SO libspdk_jsonrpc.so.6.0 00:02:57.818 SYMLINK libspdk_jsonrpc.so 00:02:58.076 CC lib/rpc/rpc.o 00:02:58.076 LIB libspdk_env_dpdk.a 00:02:58.076 LIB libspdk_rpc.a 00:02:58.076 SO libspdk_env_dpdk.so.15.1 00:02:58.076 SO libspdk_rpc.so.6.0 00:02:58.334 SYMLINK libspdk_rpc.so 00:02:58.334 SYMLINK libspdk_env_dpdk.so 00:02:58.334 CC lib/trace/trace.o 00:02:58.334 CC lib/trace/trace_rpc.o 00:02:58.334 CC lib/trace/trace_flags.o 00:02:58.334 CC lib/notify/notify.o 00:02:58.334 CC lib/notify/notify_rpc.o 00:02:58.334 CC lib/keyring/keyring.o 00:02:58.334 CC lib/keyring/keyring_rpc.o 00:02:58.593 LIB libspdk_notify.a 00:02:58.593 SO libspdk_notify.so.6.0 00:02:58.593 LIB libspdk_keyring.a 00:02:58.593 SO libspdk_keyring.so.2.0 00:02:58.593 LIB libspdk_trace.a 00:02:58.593 SYMLINK libspdk_notify.so 00:02:58.593 SO libspdk_trace.so.11.0 00:02:58.593 SYMLINK libspdk_keyring.so 00:02:58.851 SYMLINK libspdk_trace.so 00:02:58.851 CC lib/thread/iobuf.o 00:02:58.851 CC lib/thread/thread.o 00:02:58.851 CC lib/sock/sock.o 00:02:58.851 CC lib/sock/sock_rpc.o 00:02:59.418 LIB libspdk_sock.a 00:02:59.418 SO libspdk_sock.so.10.0 00:02:59.418 SYMLINK libspdk_sock.so 00:02:59.676 CC lib/nvme/nvme_fabric.o 00:02:59.676 CC lib/nvme/nvme_ctrlr.o 00:02:59.676 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.676 CC lib/nvme/nvme_ns.o 00:02:59.676 CC lib/nvme/nvme_ns_cmd.o 00:02:59.676 CC lib/nvme/nvme_pcie_common.o 00:02:59.676 CC lib/nvme/nvme_pcie.o 00:02:59.676 CC lib/nvme/nvme_qpair.o 00:02:59.676 CC lib/nvme/nvme.o 00:03:00.241 LIB libspdk_thread.a 00:03:00.241 CC lib/nvme/nvme_quirks.o 00:03:00.241 SO libspdk_thread.so.11.0 00:03:00.241 CC 
lib/nvme/nvme_transport.o 00:03:00.241 CC lib/nvme/nvme_discovery.o 00:03:00.241 SYMLINK libspdk_thread.so 00:03:00.241 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:00.241 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.500 CC lib/nvme/nvme_tcp.o 00:03:00.500 CC lib/nvme/nvme_opal.o 00:03:00.500 CC lib/nvme/nvme_io_msg.o 00:03:00.758 CC lib/nvme/nvme_poll_group.o 00:03:00.758 CC lib/nvme/nvme_zns.o 00:03:00.758 CC lib/nvme/nvme_stubs.o 00:03:00.758 CC lib/nvme/nvme_auth.o 00:03:00.758 CC lib/nvme/nvme_cuse.o 00:03:01.015 CC lib/nvme/nvme_rdma.o 00:03:01.015 CC lib/accel/accel.o 00:03:01.015 CC lib/accel/accel_rpc.o 00:03:01.015 CC lib/accel/accel_sw.o 00:03:01.274 CC lib/blob/blobstore.o 00:03:01.531 CC lib/virtio/virtio.o 00:03:01.531 CC lib/virtio/virtio_vhost_user.o 00:03:01.531 CC lib/init/json_config.o 00:03:01.531 CC lib/blob/request.o 00:03:01.531 CC lib/init/subsystem.o 00:03:01.789 CC lib/init/subsystem_rpc.o 00:03:01.789 CC lib/init/rpc.o 00:03:01.789 CC lib/virtio/virtio_vfio_user.o 00:03:01.789 CC lib/virtio/virtio_pci.o 00:03:01.789 CC lib/fsdev/fsdev.o 00:03:01.789 CC lib/fsdev/fsdev_io.o 00:03:01.789 LIB libspdk_init.a 00:03:01.789 CC lib/blob/zeroes.o 00:03:01.789 SO libspdk_init.so.6.0 00:03:01.789 CC lib/blob/blob_bs_dev.o 00:03:02.046 SYMLINK libspdk_init.so 00:03:02.046 CC lib/fsdev/fsdev_rpc.o 00:03:02.046 LIB libspdk_accel.a 00:03:02.046 SO libspdk_accel.so.16.0 00:03:02.046 SYMLINK libspdk_accel.so 00:03:02.046 CC lib/event/app.o 00:03:02.046 CC lib/event/reactor.o 00:03:02.046 LIB libspdk_virtio.a 00:03:02.046 CC lib/event/log_rpc.o 00:03:02.046 CC lib/event/app_rpc.o 00:03:02.046 SO libspdk_virtio.so.7.0 00:03:02.304 CC lib/bdev/bdev.o 00:03:02.304 SYMLINK libspdk_virtio.so 00:03:02.304 CC lib/event/scheduler_static.o 00:03:02.304 CC lib/bdev/bdev_rpc.o 00:03:02.304 CC lib/bdev/bdev_zone.o 00:03:02.304 CC lib/bdev/part.o 00:03:02.304 CC lib/bdev/scsi_nvme.o 00:03:02.304 LIB libspdk_fsdev.a 00:03:02.304 LIB libspdk_nvme.a 00:03:02.562 SO libspdk_fsdev.so.2.0 00:03:02.562 SYMLINK libspdk_fsdev.so 00:03:02.562 LIB libspdk_event.a 00:03:02.562 SO libspdk_nvme.so.14.1 00:03:02.562 SO libspdk_event.so.14.0 00:03:02.562 SYMLINK libspdk_event.so 00:03:02.562 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.820 SYMLINK libspdk_nvme.so 00:03:03.385 LIB libspdk_fuse_dispatcher.a 00:03:03.385 SO libspdk_fuse_dispatcher.so.1.0 00:03:03.385 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.951 LIB libspdk_blob.a 00:03:04.210 SO libspdk_blob.so.11.0 00:03:04.210 SYMLINK libspdk_blob.so 00:03:04.469 CC lib/blobfs/blobfs.o 00:03:04.469 CC lib/blobfs/tree.o 00:03:04.469 CC lib/lvol/lvol.o 00:03:05.035 LIB libspdk_bdev.a 00:03:05.035 SO libspdk_bdev.so.17.0 00:03:05.035 SYMLINK libspdk_bdev.so 00:03:05.035 LIB libspdk_blobfs.a 00:03:05.293 SO libspdk_blobfs.so.10.0 00:03:05.293 SYMLINK libspdk_blobfs.so 00:03:05.293 CC lib/scsi/dev.o 00:03:05.293 CC lib/ublk/ublk_rpc.o 00:03:05.293 CC lib/scsi/lun.o 00:03:05.293 CC lib/ublk/ublk.o 00:03:05.293 CC lib/scsi/port.o 00:03:05.293 CC lib/scsi/scsi.o 00:03:05.293 CC lib/nvmf/ctrlr.o 00:03:05.293 CC lib/ftl/ftl_core.o 00:03:05.293 CC lib/nbd/nbd.o 00:03:05.293 CC lib/nbd/nbd_rpc.o 00:03:05.293 CC lib/ftl/ftl_init.o 00:03:05.551 LIB libspdk_lvol.a 00:03:05.551 CC lib/ftl/ftl_layout.o 00:03:05.551 SO libspdk_lvol.so.10.0 00:03:05.551 CC lib/ftl/ftl_debug.o 00:03:05.551 SYMLINK libspdk_lvol.so 00:03:05.551 CC lib/ftl/ftl_io.o 00:03:05.551 CC lib/ftl/ftl_sb.o 00:03:05.551 CC lib/scsi/scsi_bdev.o 00:03:05.551 CC lib/ftl/ftl_l2p.o 00:03:05.551 LIB libspdk_nbd.a 
00:03:05.551 SO libspdk_nbd.so.7.0 00:03:05.551 CC lib/ftl/ftl_l2p_flat.o 00:03:05.809 SYMLINK libspdk_nbd.so 00:03:05.809 CC lib/ftl/ftl_nv_cache.o 00:03:05.809 CC lib/nvmf/ctrlr_discovery.o 00:03:05.809 CC lib/nvmf/ctrlr_bdev.o 00:03:05.809 CC lib/nvmf/subsystem.o 00:03:05.809 CC lib/nvmf/nvmf.o 00:03:05.809 CC lib/ftl/ftl_band.o 00:03:05.809 CC lib/nvmf/nvmf_rpc.o 00:03:05.809 LIB libspdk_ublk.a 00:03:06.078 SO libspdk_ublk.so.3.0 00:03:06.078 SYMLINK libspdk_ublk.so 00:03:06.078 CC lib/nvmf/transport.o 00:03:06.078 CC lib/scsi/scsi_pr.o 00:03:06.078 CC lib/ftl/ftl_band_ops.o 00:03:06.351 CC lib/ftl/ftl_writer.o 00:03:06.351 CC lib/scsi/scsi_rpc.o 00:03:06.351 CC lib/scsi/task.o 00:03:06.351 CC lib/nvmf/tcp.o 00:03:06.609 CC lib/ftl/ftl_rq.o 00:03:06.609 CC lib/nvmf/stubs.o 00:03:06.609 CC lib/nvmf/mdns_server.o 00:03:06.609 CC lib/nvmf/rdma.o 00:03:06.609 LIB libspdk_scsi.a 00:03:06.609 CC lib/ftl/ftl_reloc.o 00:03:06.609 SO libspdk_scsi.so.9.0 00:03:06.609 CC lib/ftl/ftl_l2p_cache.o 00:03:06.609 CC lib/ftl/ftl_p2l.o 00:03:06.609 CC lib/ftl/ftl_p2l_log.o 00:03:06.867 SYMLINK libspdk_scsi.so 00:03:06.867 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.867 CC lib/nvmf/auth.o 00:03:06.867 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:07.125 CC lib/iscsi/conn.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.125 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.383 CC lib/vhost/vhost.o 00:03:07.383 CC lib/vhost/vhost_rpc.o 00:03:07.383 CC lib/vhost/vhost_scsi.o 00:03:07.383 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.383 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.641 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.641 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.641 CC lib/ftl/utils/ftl_conf.o 00:03:07.641 CC lib/ftl/utils/ftl_md.o 00:03:07.641 CC lib/iscsi/init_grp.o 00:03:07.641 CC lib/ftl/utils/ftl_mempool.o 00:03:07.641 CC lib/vhost/vhost_blk.o 00:03:07.900 CC lib/vhost/rte_vhost_user.o 00:03:07.900 CC lib/iscsi/iscsi.o 00:03:07.900 CC lib/iscsi/param.o 00:03:07.900 CC lib/ftl/utils/ftl_bitmap.o 00:03:07.900 CC lib/iscsi/portal_grp.o 00:03:07.900 CC lib/iscsi/tgt_node.o 00:03:07.900 CC lib/iscsi/iscsi_subsystem.o 00:03:07.900 CC lib/iscsi/iscsi_rpc.o 00:03:07.900 CC lib/ftl/utils/ftl_property.o 00:03:08.158 CC lib/iscsi/task.o 00:03:08.158 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.158 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.158 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.416 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.416 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.416 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.416 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.416 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.416 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.416 CC lib/ftl/base/ftl_base_dev.o 00:03:08.416 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.674 CC lib/ftl/ftl_trace.o 00:03:08.674 LIB libspdk_vhost.a 00:03:08.674 SO libspdk_vhost.so.8.0 00:03:08.674 SYMLINK libspdk_vhost.so 00:03:08.674 LIB libspdk_ftl.a 00:03:08.674 LIB libspdk_nvmf.a 00:03:08.932 SO libspdk_nvmf.so.20.0 00:03:08.932 SO libspdk_ftl.so.9.0 00:03:08.932 LIB libspdk_iscsi.a 00:03:08.932 SO libspdk_iscsi.so.8.0 
00:03:09.191 SYMLINK libspdk_nvmf.so 00:03:09.191 SYMLINK libspdk_ftl.so 00:03:09.191 SYMLINK libspdk_iscsi.so 00:03:09.450 CC module/env_dpdk/env_dpdk_rpc.o 00:03:09.450 CC module/keyring/file/keyring.o 00:03:09.450 CC module/sock/posix/posix.o 00:03:09.450 CC module/blob/bdev/blob_bdev.o 00:03:09.450 CC module/keyring/linux/keyring.o 00:03:09.450 CC module/accel/error/accel_error.o 00:03:09.450 CC module/accel/dsa/accel_dsa.o 00:03:09.450 CC module/fsdev/aio/fsdev_aio.o 00:03:09.450 CC module/accel/ioat/accel_ioat.o 00:03:09.450 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:09.450 LIB libspdk_env_dpdk_rpc.a 00:03:09.708 SO libspdk_env_dpdk_rpc.so.6.0 00:03:09.708 CC module/keyring/file/keyring_rpc.o 00:03:09.708 CC module/keyring/linux/keyring_rpc.o 00:03:09.708 SYMLINK libspdk_env_dpdk_rpc.so 00:03:09.708 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.708 LIB libspdk_scheduler_dynamic.a 00:03:09.708 CC module/accel/error/accel_error_rpc.o 00:03:09.708 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.708 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.708 LIB libspdk_keyring_file.a 00:03:09.708 LIB libspdk_blob_bdev.a 00:03:09.708 SYMLINK libspdk_scheduler_dynamic.so 00:03:09.708 SO libspdk_keyring_file.so.2.0 00:03:09.708 LIB libspdk_keyring_linux.a 00:03:09.708 SO libspdk_blob_bdev.so.11.0 00:03:09.708 SO libspdk_keyring_linux.so.1.0 00:03:09.708 LIB libspdk_accel_error.a 00:03:09.708 LIB libspdk_accel_ioat.a 00:03:09.708 SYMLINK libspdk_keyring_file.so 00:03:09.708 SYMLINK libspdk_keyring_linux.so 00:03:09.708 SYMLINK libspdk_blob_bdev.so 00:03:09.708 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:09.708 SO libspdk_accel_ioat.so.6.0 00:03:09.708 SO libspdk_accel_error.so.2.0 00:03:09.708 LIB libspdk_accel_dsa.a 00:03:09.708 SO libspdk_accel_dsa.so.5.0 00:03:09.967 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:09.967 SYMLINK libspdk_accel_error.so 00:03:09.967 SYMLINK libspdk_accel_ioat.so 00:03:09.967 SYMLINK libspdk_accel_dsa.so 00:03:09.967 CC module/fsdev/aio/linux_aio_mgr.o 00:03:09.967 CC module/accel/iaa/accel_iaa.o 00:03:09.967 CC module/scheduler/gscheduler/gscheduler.o 00:03:09.967 LIB libspdk_scheduler_dpdk_governor.a 00:03:09.967 CC module/bdev/delay/vbdev_delay.o 00:03:09.967 CC module/bdev/error/vbdev_error.o 00:03:09.967 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:09.967 CC module/bdev/gpt/gpt.o 00:03:09.967 LIB libspdk_scheduler_gscheduler.a 00:03:09.967 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.967 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.967 LIB libspdk_fsdev_aio.a 00:03:09.967 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.225 LIB libspdk_sock_posix.a 00:03:10.225 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.225 SO libspdk_fsdev_aio.so.1.0 00:03:10.225 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.225 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.225 SO libspdk_sock_posix.so.6.0 00:03:10.225 CC module/bdev/gpt/vbdev_gpt.o 00:03:10.225 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.225 SYMLINK libspdk_sock_posix.so 00:03:10.225 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.225 SYMLINK libspdk_fsdev_aio.so 00:03:10.225 LIB libspdk_accel_iaa.a 00:03:10.225 LIB libspdk_bdev_error.a 00:03:10.225 SO libspdk_accel_iaa.so.3.0 00:03:10.225 SO libspdk_bdev_error.so.6.0 00:03:10.225 SYMLINK libspdk_accel_iaa.so 00:03:10.225 LIB libspdk_bdev_delay.a 00:03:10.225 CC module/bdev/lvol/vbdev_lvol.o 00:03:10.225 CC module/bdev/malloc/bdev_malloc.o 00:03:10.225 SO libspdk_bdev_delay.so.6.0 00:03:10.225 SYMLINK libspdk_bdev_error.so 00:03:10.225 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:10.225 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:10.225 LIB libspdk_blobfs_bdev.a 00:03:10.483 CC module/bdev/null/bdev_null.o 00:03:10.483 SO libspdk_blobfs_bdev.so.6.0 00:03:10.483 SYMLINK libspdk_bdev_delay.so 00:03:10.483 LIB libspdk_bdev_gpt.a 00:03:10.483 CC module/bdev/nvme/bdev_nvme.o 00:03:10.483 SYMLINK libspdk_blobfs_bdev.so 00:03:10.483 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.483 SO libspdk_bdev_gpt.so.6.0 00:03:10.483 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.483 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.483 SYMLINK libspdk_bdev_gpt.so 00:03:10.483 CC module/bdev/null/bdev_null_rpc.o 00:03:10.483 CC module/bdev/raid/bdev_raid.o 00:03:10.483 LIB libspdk_bdev_malloc.a 00:03:10.483 SO libspdk_bdev_malloc.so.6.0 00:03:10.742 LIB libspdk_bdev_null.a 00:03:10.742 LIB libspdk_bdev_passthru.a 00:03:10.742 SYMLINK libspdk_bdev_malloc.so 00:03:10.742 SO libspdk_bdev_null.so.6.0 00:03:10.742 SO libspdk_bdev_passthru.so.6.0 00:03:10.742 LIB libspdk_bdev_lvol.a 00:03:10.742 SYMLINK libspdk_bdev_null.so 00:03:10.742 SYMLINK libspdk_bdev_passthru.so 00:03:10.742 SO libspdk_bdev_lvol.so.6.0 00:03:10.742 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:10.742 CC module/bdev/split/vbdev_split.o 00:03:10.742 CC module/bdev/xnvme/bdev_xnvme.o 00:03:10.742 SYMLINK libspdk_bdev_lvol.so 00:03:10.742 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.742 CC module/bdev/aio/bdev_aio.o 00:03:10.742 CC module/bdev/ftl/bdev_ftl.o 00:03:10.742 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.000 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.000 LIB libspdk_bdev_split.a 00:03:11.000 SO libspdk_bdev_split.so.6.0 00:03:11.000 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.000 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:11.000 SYMLINK libspdk_bdev_split.so 00:03:11.000 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.000 CC module/bdev/nvme/nvme_rpc.o 00:03:11.000 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.000 LIB libspdk_bdev_xnvme.a 00:03:11.000 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.000 LIB libspdk_bdev_aio.a 00:03:11.000 LIB libspdk_bdev_zone_block.a 00:03:11.259 SO libspdk_bdev_xnvme.so.3.0 00:03:11.259 SO libspdk_bdev_aio.so.6.0 00:03:11.259 SO libspdk_bdev_zone_block.so.6.0 00:03:11.259 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.259 SYMLINK libspdk_bdev_xnvme.so 00:03:11.259 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.259 SYMLINK libspdk_bdev_aio.so 00:03:11.259 SYMLINK libspdk_bdev_zone_block.so 00:03:11.259 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.259 CC module/bdev/nvme/vbdev_opal.o 00:03:11.259 LIB libspdk_bdev_iscsi.a 00:03:11.259 SO libspdk_bdev_iscsi.so.6.0 00:03:11.259 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.259 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.259 SYMLINK libspdk_bdev_iscsi.so 00:03:11.259 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.259 CC module/bdev/raid/raid0.o 00:03:11.259 LIB libspdk_bdev_ftl.a 00:03:11.259 CC module/bdev/raid/raid1.o 00:03:11.518 SO libspdk_bdev_ftl.so.6.0 00:03:11.518 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.518 CC module/bdev/raid/concat.o 00:03:11.518 SYMLINK libspdk_bdev_ftl.so 00:03:11.518 LIB libspdk_bdev_virtio.a 00:03:11.518 SO libspdk_bdev_virtio.so.6.0 00:03:11.776 SYMLINK libspdk_bdev_virtio.so 00:03:11.776 LIB libspdk_bdev_raid.a 00:03:11.776 SO libspdk_bdev_raid.so.6.0 00:03:11.776 SYMLINK libspdk_bdev_raid.so 00:03:13.151 LIB libspdk_bdev_nvme.a 00:03:13.151 SO libspdk_bdev_nvme.so.7.1 00:03:13.151 SYMLINK libspdk_bdev_nvme.so 
00:03:13.410 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.410 CC module/event/subsystems/vmd/vmd.o 00:03:13.410 CC module/event/subsystems/sock/sock.o 00:03:13.410 CC module/event/subsystems/keyring/keyring.o 00:03:13.410 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.410 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.410 CC module/event/subsystems/fsdev/fsdev.o 00:03:13.410 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.410 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.410 LIB libspdk_event_iobuf.a 00:03:13.410 LIB libspdk_event_vhost_blk.a 00:03:13.410 LIB libspdk_event_fsdev.a 00:03:13.410 LIB libspdk_event_vmd.a 00:03:13.410 LIB libspdk_event_keyring.a 00:03:13.410 LIB libspdk_event_scheduler.a 00:03:13.410 LIB libspdk_event_sock.a 00:03:13.668 SO libspdk_event_vhost_blk.so.3.0 00:03:13.668 SO libspdk_event_fsdev.so.1.0 00:03:13.668 SO libspdk_event_keyring.so.1.0 00:03:13.668 SO libspdk_event_vmd.so.6.0 00:03:13.668 SO libspdk_event_iobuf.so.3.0 00:03:13.668 SO libspdk_event_sock.so.5.0 00:03:13.668 SO libspdk_event_scheduler.so.4.0 00:03:13.668 SYMLINK libspdk_event_vhost_blk.so 00:03:13.668 SYMLINK libspdk_event_fsdev.so 00:03:13.668 SYMLINK libspdk_event_scheduler.so 00:03:13.668 SYMLINK libspdk_event_vmd.so 00:03:13.668 SYMLINK libspdk_event_keyring.so 00:03:13.668 SYMLINK libspdk_event_sock.so 00:03:13.668 SYMLINK libspdk_event_iobuf.so 00:03:13.926 CC module/event/subsystems/accel/accel.o 00:03:13.926 LIB libspdk_event_accel.a 00:03:13.926 SO libspdk_event_accel.so.6.0 00:03:13.926 SYMLINK libspdk_event_accel.so 00:03:14.184 CC module/event/subsystems/bdev/bdev.o 00:03:14.443 LIB libspdk_event_bdev.a 00:03:14.443 SO libspdk_event_bdev.so.6.0 00:03:14.443 SYMLINK libspdk_event_bdev.so 00:03:14.701 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.701 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.701 CC module/event/subsystems/nbd/nbd.o 00:03:14.701 CC module/event/subsystems/scsi/scsi.o 00:03:14.701 CC module/event/subsystems/ublk/ublk.o 00:03:14.701 LIB libspdk_event_nbd.a 00:03:14.701 LIB libspdk_event_ublk.a 00:03:14.701 LIB libspdk_event_scsi.a 00:03:14.701 SO libspdk_event_ublk.so.3.0 00:03:14.701 SO libspdk_event_nbd.so.6.0 00:03:14.701 SO libspdk_event_scsi.so.6.0 00:03:14.701 LIB libspdk_event_nvmf.a 00:03:14.701 SYMLINK libspdk_event_ublk.so 00:03:14.960 SYMLINK libspdk_event_nbd.so 00:03:14.960 SYMLINK libspdk_event_scsi.so 00:03:14.960 SO libspdk_event_nvmf.so.6.0 00:03:14.960 SYMLINK libspdk_event_nvmf.so 00:03:14.960 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.960 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.219 LIB libspdk_event_vhost_scsi.a 00:03:15.219 SO libspdk_event_vhost_scsi.so.3.0 00:03:15.219 LIB libspdk_event_iscsi.a 00:03:15.219 SO libspdk_event_iscsi.so.6.0 00:03:15.219 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.219 SYMLINK libspdk_event_iscsi.so 00:03:15.477 SO libspdk.so.6.0 00:03:15.477 SYMLINK libspdk.so 00:03:15.477 CXX app/trace/trace.o 00:03:15.477 CC app/trace_record/trace_record.o 00:03:15.477 CC app/spdk_lspci/spdk_lspci.o 00:03:15.477 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:15.736 CC app/spdk_tgt/spdk_tgt.o 00:03:15.736 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.736 CC examples/ioat/perf/perf.o 00:03:15.736 CC app/nvmf_tgt/nvmf_main.o 00:03:15.736 CC examples/util/zipf/zipf.o 00:03:15.736 CC test/thread/poller_perf/poller_perf.o 00:03:15.736 LINK spdk_lspci 00:03:15.736 LINK zipf 00:03:15.736 LINK interrupt_tgt 00:03:15.736 LINK spdk_trace_record 00:03:15.736 LINK 
poller_perf 00:03:15.736 LINK nvmf_tgt 00:03:15.736 LINK spdk_tgt 00:03:15.736 LINK iscsi_tgt 00:03:15.736 LINK ioat_perf 00:03:15.736 LINK spdk_trace 00:03:15.994 CC app/spdk_nvme_perf/perf.o 00:03:15.994 CC app/spdk_nvme_identify/identify.o 00:03:15.994 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.994 CC app/spdk_top/spdk_top.o 00:03:15.994 TEST_HEADER include/spdk/accel.h 00:03:15.994 TEST_HEADER include/spdk/accel_module.h 00:03:15.994 TEST_HEADER include/spdk/assert.h 00:03:15.994 TEST_HEADER include/spdk/barrier.h 00:03:15.994 TEST_HEADER include/spdk/base64.h 00:03:15.994 TEST_HEADER include/spdk/bdev.h 00:03:15.994 TEST_HEADER include/spdk/bdev_module.h 00:03:15.994 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.994 TEST_HEADER include/spdk/bit_array.h 00:03:15.994 TEST_HEADER include/spdk/bit_pool.h 00:03:15.994 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.994 CC examples/ioat/verify/verify.o 00:03:15.994 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.994 TEST_HEADER include/spdk/blobfs.h 00:03:15.994 TEST_HEADER include/spdk/blob.h 00:03:15.994 TEST_HEADER include/spdk/conf.h 00:03:15.994 TEST_HEADER include/spdk/config.h 00:03:15.994 TEST_HEADER include/spdk/cpuset.h 00:03:15.994 TEST_HEADER include/spdk/crc16.h 00:03:15.994 TEST_HEADER include/spdk/crc32.h 00:03:15.994 TEST_HEADER include/spdk/crc64.h 00:03:15.994 TEST_HEADER include/spdk/dif.h 00:03:15.994 TEST_HEADER include/spdk/dma.h 00:03:15.994 TEST_HEADER include/spdk/endian.h 00:03:15.994 CC app/spdk_dd/spdk_dd.o 00:03:15.994 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.994 TEST_HEADER include/spdk/env.h 00:03:15.994 TEST_HEADER include/spdk/event.h 00:03:15.994 TEST_HEADER include/spdk/fd_group.h 00:03:15.994 TEST_HEADER include/spdk/fd.h 00:03:15.994 TEST_HEADER include/spdk/file.h 00:03:15.994 TEST_HEADER include/spdk/fsdev.h 00:03:15.994 CC test/dma/test_dma/test_dma.o 00:03:15.994 TEST_HEADER include/spdk/fsdev_module.h 00:03:15.994 TEST_HEADER include/spdk/ftl.h 00:03:15.994 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:15.994 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.994 TEST_HEADER include/spdk/hexlify.h 00:03:15.994 TEST_HEADER include/spdk/histogram_data.h 00:03:15.994 TEST_HEADER include/spdk/idxd.h 00:03:15.994 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.994 TEST_HEADER include/spdk/init.h 00:03:15.994 TEST_HEADER include/spdk/ioat.h 00:03:15.994 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.994 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.994 TEST_HEADER include/spdk/json.h 00:03:15.994 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.994 TEST_HEADER include/spdk/keyring.h 00:03:15.994 TEST_HEADER include/spdk/keyring_module.h 00:03:15.994 TEST_HEADER include/spdk/likely.h 00:03:15.994 CC test/app/bdev_svc/bdev_svc.o 00:03:15.994 TEST_HEADER include/spdk/log.h 00:03:15.994 TEST_HEADER include/spdk/lvol.h 00:03:15.994 TEST_HEADER include/spdk/md5.h 00:03:15.994 TEST_HEADER include/spdk/memory.h 00:03:15.994 LINK spdk_nvme_discover 00:03:15.994 TEST_HEADER include/spdk/mmio.h 00:03:15.994 TEST_HEADER include/spdk/nbd.h 00:03:15.994 CC app/fio/nvme/fio_plugin.o 00:03:15.994 TEST_HEADER include/spdk/net.h 00:03:15.994 TEST_HEADER include/spdk/notify.h 00:03:15.995 TEST_HEADER include/spdk/nvme.h 00:03:15.995 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.995 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.995 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.995 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.995 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.995 TEST_HEADER include/spdk/nvmf_cmd.h 
00:03:15.995 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.995 TEST_HEADER include/spdk/nvmf.h 00:03:15.995 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.995 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.995 TEST_HEADER include/spdk/opal.h 00:03:16.253 TEST_HEADER include/spdk/opal_spec.h 00:03:16.253 TEST_HEADER include/spdk/pci_ids.h 00:03:16.253 TEST_HEADER include/spdk/pipe.h 00:03:16.253 TEST_HEADER include/spdk/queue.h 00:03:16.253 TEST_HEADER include/spdk/reduce.h 00:03:16.253 TEST_HEADER include/spdk/rpc.h 00:03:16.253 TEST_HEADER include/spdk/scheduler.h 00:03:16.253 TEST_HEADER include/spdk/scsi.h 00:03:16.253 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.253 TEST_HEADER include/spdk/sock.h 00:03:16.253 TEST_HEADER include/spdk/stdinc.h 00:03:16.253 TEST_HEADER include/spdk/string.h 00:03:16.253 TEST_HEADER include/spdk/thread.h 00:03:16.253 TEST_HEADER include/spdk/trace.h 00:03:16.253 TEST_HEADER include/spdk/trace_parser.h 00:03:16.253 TEST_HEADER include/spdk/tree.h 00:03:16.253 TEST_HEADER include/spdk/ublk.h 00:03:16.253 TEST_HEADER include/spdk/util.h 00:03:16.253 TEST_HEADER include/spdk/uuid.h 00:03:16.253 TEST_HEADER include/spdk/version.h 00:03:16.253 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.253 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.253 TEST_HEADER include/spdk/vhost.h 00:03:16.253 TEST_HEADER include/spdk/vmd.h 00:03:16.253 TEST_HEADER include/spdk/xor.h 00:03:16.253 TEST_HEADER include/spdk/zipf.h 00:03:16.253 CXX test/cpp_headers/accel.o 00:03:16.253 LINK verify 00:03:16.253 CXX test/cpp_headers/accel_module.o 00:03:16.253 LINK bdev_svc 00:03:16.253 CXX test/cpp_headers/assert.o 00:03:16.253 LINK spdk_dd 00:03:16.512 CC examples/sock/hello_world/hello_sock.o 00:03:16.512 CC examples/thread/thread/thread_ex.o 00:03:16.512 CXX test/cpp_headers/barrier.o 00:03:16.512 LINK test_dma 00:03:16.512 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.512 LINK spdk_nvme_identify 00:03:16.512 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.512 LINK hello_sock 00:03:16.512 CXX test/cpp_headers/base64.o 00:03:16.770 LINK thread 00:03:16.770 LINK spdk_nvme 00:03:16.770 LINK spdk_nvme_perf 00:03:16.770 LINK spdk_top 00:03:16.770 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.770 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.770 CXX test/cpp_headers/bdev.o 00:03:16.770 CC app/fio/bdev/fio_plugin.o 00:03:16.770 CC app/vhost/vhost.o 00:03:16.770 LINK nvme_fuzz 00:03:16.770 CXX test/cpp_headers/bdev_module.o 00:03:17.028 CC test/app/histogram_perf/histogram_perf.o 00:03:17.028 CC test/app/jsoncat/jsoncat.o 00:03:17.028 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.028 CC examples/idxd/perf/perf.o 00:03:17.028 LINK lsvmd 00:03:17.028 CXX test/cpp_headers/bdev_zone.o 00:03:17.028 LINK vhost 00:03:17.028 LINK histogram_perf 00:03:17.028 LINK jsoncat 00:03:17.028 LINK vhost_fuzz 00:03:17.028 CC test/app/stub/stub.o 00:03:17.028 CXX test/cpp_headers/bit_array.o 00:03:17.286 CXX test/cpp_headers/bit_pool.o 00:03:17.286 CC examples/vmd/led/led.o 00:03:17.286 LINK spdk_bdev 00:03:17.286 LINK stub 00:03:17.286 LINK idxd_perf 00:03:17.286 CC test/event/event_perf/event_perf.o 00:03:17.286 LINK led 00:03:17.286 CXX test/cpp_headers/blob_bdev.o 00:03:17.286 CC test/nvme/reset/reset.o 00:03:17.286 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.286 CC test/nvme/aer/aer.o 00:03:17.286 CC test/nvme/sgl/sgl.o 00:03:17.286 LINK event_perf 00:03:17.286 CC test/rpc_client/rpc_client_test.o 00:03:17.544 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.544 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:03:17.544 LINK reset 00:03:17.544 CC examples/accel/perf/accel_perf.o 00:03:17.544 CC test/event/reactor/reactor.o 00:03:17.544 LINK rpc_client_test 00:03:17.544 LINK aer 00:03:17.544 CXX test/cpp_headers/blobfs.o 00:03:17.544 LINK sgl 00:03:17.544 LINK reactor 00:03:17.801 CXX test/cpp_headers/blob.o 00:03:17.801 CC test/nvme/e2edp/nvme_dp.o 00:03:17.801 LINK hello_fsdev 00:03:17.801 LINK mem_callbacks 00:03:17.801 CXX test/cpp_headers/conf.o 00:03:17.801 CC test/event/reactor_perf/reactor_perf.o 00:03:17.801 CC test/event/app_repeat/app_repeat.o 00:03:17.801 CXX test/cpp_headers/config.o 00:03:17.801 CC test/event/scheduler/scheduler.o 00:03:17.801 CC test/env/vtophys/vtophys.o 00:03:17.801 CXX test/cpp_headers/cpuset.o 00:03:17.801 LINK reactor_perf 00:03:17.801 LINK nvme_dp 00:03:18.059 LINK accel_perf 00:03:18.059 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.059 LINK app_repeat 00:03:18.059 CC examples/blob/hello_world/hello_blob.o 00:03:18.059 LINK scheduler 00:03:18.059 LINK vtophys 00:03:18.059 CXX test/cpp_headers/crc16.o 00:03:18.059 CXX test/cpp_headers/crc32.o 00:03:18.059 LINK env_dpdk_post_init 00:03:18.059 CC test/env/memory/memory_ut.o 00:03:18.059 CC test/nvme/overhead/overhead.o 00:03:18.059 CXX test/cpp_headers/crc64.o 00:03:18.059 CXX test/cpp_headers/dif.o 00:03:18.059 LINK hello_blob 00:03:18.059 CC examples/nvme/hello_world/hello_world.o 00:03:18.317 CC examples/nvme/reconnect/reconnect.o 00:03:18.317 CC examples/blob/cli/blobcli.o 00:03:18.317 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.317 CXX test/cpp_headers/dma.o 00:03:18.317 LINK iscsi_fuzz 00:03:18.317 CXX test/cpp_headers/endian.o 00:03:18.317 LINK hello_world 00:03:18.317 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.317 LINK overhead 00:03:18.576 LINK hello_bdev 00:03:18.576 CXX test/cpp_headers/env_dpdk.o 00:03:18.576 LINK reconnect 00:03:18.576 CC test/nvme/startup/startup.o 00:03:18.576 CC test/nvme/err_injection/err_injection.o 00:03:18.576 CC examples/nvme/arbitration/arbitration.o 00:03:18.576 CC examples/nvme/hotplug/hotplug.o 00:03:18.576 CXX test/cpp_headers/env.o 00:03:18.576 LINK blobcli 00:03:18.576 LINK startup 00:03:18.576 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.576 LINK err_injection 00:03:18.576 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.834 CXX test/cpp_headers/event.o 00:03:18.834 LINK hotplug 00:03:18.834 LINK arbitration 00:03:18.834 CXX test/cpp_headers/fd_group.o 00:03:18.834 LINK cmb_copy 00:03:18.834 LINK nvme_manage 00:03:18.834 CC test/nvme/reserve/reserve.o 00:03:18.834 LINK memory_ut 00:03:18.834 CC examples/nvme/abort/abort.o 00:03:19.092 CC test/blobfs/mkfs/mkfs.o 00:03:19.092 CC test/accel/dif/dif.o 00:03:19.092 CXX test/cpp_headers/fd.o 00:03:19.092 CC test/env/pci/pci_ut.o 00:03:19.092 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.092 LINK reserve 00:03:19.092 LINK mkfs 00:03:19.092 CC test/nvme/simple_copy/simple_copy.o 00:03:19.092 CXX test/cpp_headers/file.o 00:03:19.092 CC test/lvol/esnap/esnap.o 00:03:19.092 LINK pmr_persistence 00:03:19.350 LINK abort 00:03:19.350 CXX test/cpp_headers/fsdev.o 00:03:19.350 CC test/nvme/connect_stress/connect_stress.o 00:03:19.350 CXX test/cpp_headers/fsdev_module.o 00:03:19.350 LINK pci_ut 00:03:19.350 LINK simple_copy 00:03:19.350 CC test/nvme/boot_partition/boot_partition.o 00:03:19.350 CC test/nvme/compliance/nvme_compliance.o 00:03:19.350 CXX test/cpp_headers/ftl.o 00:03:19.350 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.350 
LINK connect_stress 00:03:19.607 LINK bdevperf 00:03:19.607 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.607 LINK boot_partition 00:03:19.607 CC test/nvme/fdp/fdp.o 00:03:19.607 CXX test/cpp_headers/fuse_dispatcher.o 00:03:19.607 CC test/nvme/cuse/cuse.o 00:03:19.607 LINK fused_ordering 00:03:19.607 LINK nvme_compliance 00:03:19.607 CXX test/cpp_headers/gpt_spec.o 00:03:19.607 LINK dif 00:03:19.607 LINK doorbell_aers 00:03:19.607 CXX test/cpp_headers/hexlify.o 00:03:19.607 CXX test/cpp_headers/histogram_data.o 00:03:19.866 CXX test/cpp_headers/idxd.o 00:03:19.866 CXX test/cpp_headers/idxd_spec.o 00:03:19.866 CC examples/nvmf/nvmf/nvmf.o 00:03:19.866 CXX test/cpp_headers/init.o 00:03:19.866 CXX test/cpp_headers/ioat.o 00:03:19.866 CXX test/cpp_headers/ioat_spec.o 00:03:19.866 CXX test/cpp_headers/iscsi_spec.o 00:03:19.866 LINK fdp 00:03:19.866 CXX test/cpp_headers/json.o 00:03:19.866 CXX test/cpp_headers/jsonrpc.o 00:03:19.866 CXX test/cpp_headers/keyring.o 00:03:19.866 CXX test/cpp_headers/keyring_module.o 00:03:19.866 CXX test/cpp_headers/likely.o 00:03:19.866 CXX test/cpp_headers/log.o 00:03:20.125 CXX test/cpp_headers/lvol.o 00:03:20.125 LINK nvmf 00:03:20.125 CXX test/cpp_headers/md5.o 00:03:20.125 CXX test/cpp_headers/memory.o 00:03:20.125 CXX test/cpp_headers/mmio.o 00:03:20.125 CXX test/cpp_headers/nbd.o 00:03:20.125 CXX test/cpp_headers/net.o 00:03:20.125 CXX test/cpp_headers/notify.o 00:03:20.125 CXX test/cpp_headers/nvme.o 00:03:20.125 CXX test/cpp_headers/nvme_intel.o 00:03:20.125 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.125 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.125 CC test/bdev/bdevio/bdevio.o 00:03:20.125 CXX test/cpp_headers/nvme_spec.o 00:03:20.125 CXX test/cpp_headers/nvme_zns.o 00:03:20.125 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.125 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.413 CXX test/cpp_headers/nvmf.o 00:03:20.413 CXX test/cpp_headers/nvmf_spec.o 00:03:20.413 CXX test/cpp_headers/nvmf_transport.o 00:03:20.413 CXX test/cpp_headers/opal.o 00:03:20.414 CXX test/cpp_headers/opal_spec.o 00:03:20.414 CXX test/cpp_headers/pci_ids.o 00:03:20.414 CXX test/cpp_headers/pipe.o 00:03:20.414 CXX test/cpp_headers/queue.o 00:03:20.414 CXX test/cpp_headers/reduce.o 00:03:20.414 CXX test/cpp_headers/rpc.o 00:03:20.414 CXX test/cpp_headers/scheduler.o 00:03:20.414 CXX test/cpp_headers/scsi.o 00:03:20.414 CXX test/cpp_headers/scsi_spec.o 00:03:20.414 CXX test/cpp_headers/sock.o 00:03:20.414 CXX test/cpp_headers/stdinc.o 00:03:20.414 CXX test/cpp_headers/string.o 00:03:20.671 LINK bdevio 00:03:20.671 CXX test/cpp_headers/thread.o 00:03:20.671 CXX test/cpp_headers/trace.o 00:03:20.671 CXX test/cpp_headers/trace_parser.o 00:03:20.671 CXX test/cpp_headers/tree.o 00:03:20.671 CXX test/cpp_headers/ublk.o 00:03:20.671 CXX test/cpp_headers/util.o 00:03:20.671 CXX test/cpp_headers/uuid.o 00:03:20.671 CXX test/cpp_headers/version.o 00:03:20.671 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.671 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.671 CXX test/cpp_headers/vhost.o 00:03:20.671 CXX test/cpp_headers/vmd.o 00:03:20.671 CXX test/cpp_headers/xor.o 00:03:20.671 CXX test/cpp_headers/zipf.o 00:03:20.671 LINK cuse 00:03:24.865 LINK esnap 00:03:24.865 00:03:24.865 real 1m8.953s 00:03:24.865 user 6m12.837s 00:03:24.865 sys 1m11.322s 00:03:24.865 ************************************ 00:03:24.865 END TEST make 00:03:24.865 ************************************ 00:03:24.865 04:23:47 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:24.865 04:23:47 make -- 
common/autotest_common.sh@10 -- $ set +x 00:03:24.865 04:23:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.865 04:23:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.865 04:23:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.865 04:23:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.865 04:23:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.865 04:23:47 -- pm/common@44 -- $ pid=5070 00:03:24.865 04:23:47 -- pm/common@50 -- $ kill -TERM 5070 00:03:24.865 04:23:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.865 04:23:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.865 04:23:47 -- pm/common@44 -- $ pid=5071 00:03:24.865 04:23:47 -- pm/common@50 -- $ kill -TERM 5071 00:03:24.865 04:23:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.865 04:23:47 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.865 04:23:47 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:24.865 04:23:47 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:24.865 04:23:47 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:24.865 04:23:47 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:24.865 04:23:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.865 04:23:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.865 04:23:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.865 04:23:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.865 04:23:47 -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.865 04:23:47 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.865 04:23:47 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.865 04:23:47 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.865 04:23:47 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.865 04:23:47 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.865 04:23:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.865 04:23:47 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.865 04:23:47 -- scripts/common.sh@345 -- # : 1 00:03:24.865 04:23:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.865 04:23:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.865 04:23:47 -- scripts/common.sh@365 -- # decimal 1 00:03:24.865 04:23:47 -- scripts/common.sh@353 -- # local d=1 00:03:24.865 04:23:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.865 04:23:47 -- scripts/common.sh@355 -- # echo 1 00:03:24.865 04:23:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.865 04:23:47 -- scripts/common.sh@366 -- # decimal 2 00:03:24.865 04:23:47 -- scripts/common.sh@353 -- # local d=2 00:03:24.865 04:23:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.865 04:23:47 -- scripts/common.sh@355 -- # echo 2 00:03:24.865 04:23:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.865 04:23:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.865 04:23:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.865 04:23:47 -- scripts/common.sh@368 -- # return 0 00:03:24.865 04:23:47 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.865 04:23:47 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:24.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.865 --rc genhtml_branch_coverage=1 00:03:24.865 --rc genhtml_function_coverage=1 00:03:24.865 --rc genhtml_legend=1 00:03:24.865 --rc geninfo_all_blocks=1 00:03:24.865 --rc geninfo_unexecuted_blocks=1 00:03:24.865 00:03:24.865 ' 00:03:24.865 04:23:47 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:24.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.865 --rc genhtml_branch_coverage=1 00:03:24.865 --rc genhtml_function_coverage=1 00:03:24.865 --rc genhtml_legend=1 00:03:24.865 --rc geninfo_all_blocks=1 00:03:24.865 --rc geninfo_unexecuted_blocks=1 00:03:24.865 00:03:24.865 ' 00:03:24.865 04:23:47 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:24.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.865 --rc genhtml_branch_coverage=1 00:03:24.865 --rc genhtml_function_coverage=1 00:03:24.865 --rc genhtml_legend=1 00:03:24.865 --rc geninfo_all_blocks=1 00:03:24.865 --rc geninfo_unexecuted_blocks=1 00:03:24.865 00:03:24.865 ' 00:03:24.865 04:23:47 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:24.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.865 --rc genhtml_branch_coverage=1 00:03:24.865 --rc genhtml_function_coverage=1 00:03:24.865 --rc genhtml_legend=1 00:03:24.865 --rc geninfo_all_blocks=1 00:03:24.865 --rc geninfo_unexecuted_blocks=1 00:03:24.865 00:03:24.865 ' 00:03:24.865 04:23:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:24.865 04:23:47 -- nvmf/common.sh@7 -- # uname -s 00:03:24.865 04:23:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.865 04:23:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.865 04:23:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.865 04:23:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.865 04:23:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.865 04:23:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.865 04:23:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.865 04:23:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.865 04:23:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.865 04:23:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.865 04:23:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b476d71c-f613-4ce5-85f8-d410ab298fed 00:03:24.865 
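The nvmf helpers being sourced here build a per-run NVMe-oF host identity: nvme gen-hostnqn emits a UUID-based NQN, and the host ID (set just below) is the UUID portion of that same string. A minimal sketch of that derivation, assuming nvme-cli is installed (the real test/nvmf/common.sh may differ in detail):

    #!/usr/bin/env bash
    # Sketch: derive the NVMe-oF host identity the way the trace above suggests.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"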
04:23:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=b476d71c-f613-4ce5-85f8-d410ab298fed 00:03:24.865 04:23:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.865 04:23:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.865 04:23:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:24.865 04:23:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.865 04:23:47 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:24.865 04:23:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.865 04:23:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.866 04:23:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.866 04:23:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.866 04:23:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.866 04:23:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.866 04:23:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.866 04:23:47 -- paths/export.sh@5 -- # export PATH 00:03:24.866 04:23:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.866 04:23:47 -- nvmf/common.sh@51 -- # : 0 00:03:24.866 04:23:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.866 04:23:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:24.866 04:23:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.866 04:23:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.866 04:23:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.866 04:23:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.866 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:24.866 04:23:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.866 04:23:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.866 04:23:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.866 04:23:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.866 04:23:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.866 04:23:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.866 04:23:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.866 04:23:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.866 04:23:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.866 04:23:47 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.866 04:23:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.866 04:23:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.866 04:23:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.866 04:23:47 -- spdk/autotest.sh@48 -- # udevadm_pid=54280 00:03:24.866 04:23:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.866 04:23:47 -- pm/common@17 -- # local monitor 00:03:24.866 04:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.866 04:23:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.866 04:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.866 04:23:47 -- pm/common@25 -- # sleep 1 00:03:24.866 04:23:47 -- pm/common@21 -- # date +%s 00:03:24.866 04:23:47 -- pm/common@21 -- # date +%s 00:03:24.866 04:23:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730607827 00:03:24.866 04:23:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730607827 00:03:24.866 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730607827_collect-vmstat.pm.log 00:03:24.866 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730607827_collect-cpu-load.pm.log 00:03:25.799 04:23:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.799 04:23:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.799 04:23:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:25.799 04:23:48 -- common/autotest_common.sh@10 -- # set +x 00:03:25.799 04:23:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.799 04:23:48 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:25.799 04:23:48 -- common/autotest_common.sh@10 -- # set +x 00:03:25.799 04:23:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:25.799 04:23:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:25.799 04:23:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:25.799 04:23:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.799 04:23:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:25.799 04:23:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.799 04:23:48 -- common/autotest_common.sh@1455 -- # uname 00:03:25.799 04:23:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:25.799 04:23:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.799 04:23:48 -- common/autotest_common.sh@1475 -- # uname 00:03:25.799 04:23:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:25.799 04:23:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:25.799 04:23:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.799 lcov: LCOV version 1.15 00:03:25.799 04:23:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.703 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:40.703 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.921 04:24:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:52.921 04:24:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.921 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:03:52.921 04:24:15 -- spdk/autotest.sh@78 -- # rm -f 00:03:52.921 04:24:15 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.443 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:53.443 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:53.443 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:53.704 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:53.704 04:24:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.704 04:24:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:53.704 04:24:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:53.704 04:24:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:53.704 04:24:16 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:53.704 04:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:53.704 04:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:53.704 04:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:53.704 04:24:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.704 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.704 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.704 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.704 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.704 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.704 No valid GPT data, bailing 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.704 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.704 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.704 1+0 records in 00:03:53.704 1+0 records out 00:03:53.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143732 s, 73.0 MB/s 00:03:53.704 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.704 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.704 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:53.704 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:53.704 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:53.704 No valid GPT data, bailing 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.704 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.704 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:53.704 1+0 records in 00:03:53.704 1+0 records out 00:03:53.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483115 s, 217 MB/s 00:03:53.704 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.704 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.704 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:53.704 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:53.704 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:53.704 No valid GPT data, bailing 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.704 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.704 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.704 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:53.704 1+0 
records in 00:03:53.704 1+0 records out 00:03:53.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541438 s, 194 MB/s 00:03:53.705 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.705 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.705 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:53.705 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:53.705 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:53.966 No valid GPT data, bailing 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.966 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.966 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:53.966 1+0 records in 00:03:53.966 1+0 records out 00:03:53.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565087 s, 186 MB/s 00:03:53.966 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.966 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.966 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:53.966 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:53.966 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:53.966 No valid GPT data, bailing 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.966 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.966 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:53.966 1+0 records in 00:03:53.966 1+0 records out 00:03:53.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490843 s, 214 MB/s 00:03:53.966 04:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.966 04:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.966 04:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:53.966 04:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:53.966 04:24:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:53.966 No valid GPT data, bailing 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:53.966 04:24:16 -- scripts/common.sh@394 -- # pt= 00:03:53.966 04:24:16 -- scripts/common.sh@395 -- # return 1 00:03:53.966 04:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:53.966 1+0 records in 00:03:53.966 1+0 records out 00:03:53.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535258 s, 196 MB/s 00:03:53.966 04:24:16 -- spdk/autotest.sh@105 -- # sync 00:03:54.226 04:24:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.226 04:24:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.226 04:24:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.162 04:24:18 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.162 04:24:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.162 04:24:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.162 04:24:18 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:56.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.992 
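The pre-cleanup pass traced above first collects any zoned namespaces (none of these emulated devices are zoned), then walks every /dev/nvme*n* namespace, asks spdk-gpt.py/blkid whether it carries a partition table, and, since each probe comes back empty ("No valid GPT data, bailing"), zeroes its first MiB so later tests start from blank media. Condensed into a sketch of that logic (paths as in the trace, illustrative only); the setup.sh status dump continues below with the hugepage and device inventory:

    #!/usr/bin/env bash
    # Sketch of the pre-cleanup wipe seen above: skip zoned namespaces, wipe blank, unused ones.
    shopt -s extglob
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
      [[ $(cat "$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
    done
    for dev in /dev/nvme*n!(*p*); do
      [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue          # leave zoned namespaces alone
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      [[ -n $pt ]] && continue                                  # has a partition table, leave it
      dd if=/dev/zero of="$dev" bs=1M count=1                   # "No valid GPT data, bailing" -> wipe
    done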
Hugepages 00:03:56.992 node hugesize free / total 00:03:56.992 node0 1048576kB 0 / 0 00:03:56.992 node0 2048kB 0 / 0 00:03:56.992 00:03:56.992 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:56.992 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:56.993 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:56.993 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:57.251 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:57.251 04:24:20 -- spdk/autotest.sh@117 -- # uname -s 00:03:57.251 04:24:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:57.251 04:24:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:57.251 04:24:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:57.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.083 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.083 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.344 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.344 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.344 04:24:21 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:59.283 04:24:22 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:59.283 04:24:22 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:59.283 04:24:22 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.283 04:24:22 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:59.283 04:24:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:59.283 04:24:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:59.283 04:24:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.283 04:24:22 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.283 04:24:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:59.283 04:24:22 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:59.283 04:24:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:59.283 04:24:22 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.855 Waiting for block devices as requested 00:03:59.855 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.118 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.118 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.118 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.413 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:05.413 04:24:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.413 04:24:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.413 04:24:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.413 04:24:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.414 04:24:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1541 -- # continue 00:04:05.414 04:24:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.414 04:24:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1541 -- # continue 00:04:05.414 04:24:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.414 04:24:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1541 -- # continue 00:04:05.414 04:24:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:05.414 04:24:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.414 04:24:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.414 04:24:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.414 04:24:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.414 04:24:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
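The long xtrace block above is autotest's per-controller capability probe: each PCI address is resolved to its /dev/nvmeX node through sysfs, then nvme id-ctrl is parsed for the OACS field (bit 3 = namespace management) and for the unallocated capacity; when namespace management is supported and unvmcap is already 0 there is nothing to revert, so the controller is skipped (the final continue below closes out the last controller). A sketch of those checks, assuming nvme-cli is available (the real helper lives in autotest_common.sh):

    #!/usr/bin/env bash
    # Sketch of the probe traced above: resolve BDF -> /dev/nvmeX, then check OACS and unvmcap.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      sys=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      ctrlr=/dev/$(basename "$sys")
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. " 0x12a"
      (( oacs & 0x8 )) || continue                                   # bit 3: namespace management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && continue                                 # no unallocated capacity, skip
      echo "$ctrlr needs a namespace revert"
    done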
00:04:05.414 04:24:28 -- common/autotest_common.sh@1541 -- # continue 00:04:05.414 04:24:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.414 04:24:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.414 04:24:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.414 04:24:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.414 04:24:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.414 04:24:28 -- common/autotest_common.sh@10 -- # set +x 00:04:05.414 04:24:28 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.561 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.561 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.561 04:24:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.561 04:24:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.561 04:24:29 -- common/autotest_common.sh@10 -- # set +x 00:04:06.823 04:24:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.823 04:24:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:06.823 04:24:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.823 04:24:29 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:06.823 04:24:29 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:06.823 04:24:29 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:06.823 04:24:29 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.823 04:24:29 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:06.823 04:24:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:06.823 04:24:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:06.823 04:24:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.823 04:24:29 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.823 04:24:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:06.823 04:24:29 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:06.823 04:24:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:06.823 04:24:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.823 04:24:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.823 04:24:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.823 04:24:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.823 04:24:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.823 04:24:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
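opal_revert_cleanup then asks gen_nvme.sh for the same controller list and filters it by PCI device ID: only controllers reporting 0x0a54 get an OPAL revert, and these emulated controllers all report 0x0010, so the remaining iteration below matches nothing either and the revert is skipped. A minimal sketch of that filter:

    #!/usr/bin/env bash
    # Sketch of the OPAL filter traced here: keep only controllers whose PCI device ID is 0x0a54.
    mapfile -t all_bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    opal_bdfs=()
    for bdf in "${all_bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    echo "controllers needing OPAL revert: ${#opal_bdfs[@]}"   # 0 in this run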
00:04:06.823 04:24:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:06.823 04:24:29 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:06.823 04:24:29 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.823 04:24:29 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:06.823 04:24:29 -- common/autotest_common.sh@1570 -- # return 0 00:04:06.823 04:24:29 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:06.823 04:24:29 -- common/autotest_common.sh@1578 -- # return 0 00:04:06.823 04:24:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:06.823 04:24:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:06.823 04:24:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.823 04:24:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.823 04:24:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:06.823 04:24:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.823 04:24:29 -- common/autotest_common.sh@10 -- # set +x 00:04:06.823 04:24:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:06.823 04:24:29 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.823 04:24:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.823 04:24:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.823 04:24:29 -- common/autotest_common.sh@10 -- # set +x 00:04:06.823 ************************************ 00:04:06.823 START TEST env 00:04:06.823 ************************************ 00:04:06.823 04:24:29 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.823 * Looking for test storage... 00:04:06.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.823 04:24:29 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.823 04:24:29 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.823 04:24:29 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.085 04:24:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.085 04:24:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.085 04:24:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.085 04:24:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.085 04:24:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.085 04:24:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.085 04:24:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.085 04:24:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.085 04:24:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.085 04:24:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.085 04:24:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.085 04:24:29 env -- scripts/common.sh@344 -- # case "$op" in 00:04:07.085 04:24:29 env -- scripts/common.sh@345 -- # : 1 00:04:07.085 04:24:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.085 04:24:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.085 04:24:29 env -- scripts/common.sh@365 -- # decimal 1 00:04:07.085 04:24:29 env -- scripts/common.sh@353 -- # local d=1 00:04:07.085 04:24:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.085 04:24:29 env -- scripts/common.sh@355 -- # echo 1 00:04:07.085 04:24:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.085 04:24:29 env -- scripts/common.sh@366 -- # decimal 2 00:04:07.085 04:24:29 env -- scripts/common.sh@353 -- # local d=2 00:04:07.085 04:24:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.085 04:24:29 env -- scripts/common.sh@355 -- # echo 2 00:04:07.085 04:24:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.085 04:24:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.085 04:24:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.085 04:24:29 env -- scripts/common.sh@368 -- # return 0 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.085 --rc genhtml_branch_coverage=1 00:04:07.085 --rc genhtml_function_coverage=1 00:04:07.085 --rc genhtml_legend=1 00:04:07.085 --rc geninfo_all_blocks=1 00:04:07.085 --rc geninfo_unexecuted_blocks=1 00:04:07.085 00:04:07.085 ' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.085 --rc genhtml_branch_coverage=1 00:04:07.085 --rc genhtml_function_coverage=1 00:04:07.085 --rc genhtml_legend=1 00:04:07.085 --rc geninfo_all_blocks=1 00:04:07.085 --rc geninfo_unexecuted_blocks=1 00:04:07.085 00:04:07.085 ' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.085 --rc genhtml_branch_coverage=1 00:04:07.085 --rc genhtml_function_coverage=1 00:04:07.085 --rc genhtml_legend=1 00:04:07.085 --rc geninfo_all_blocks=1 00:04:07.085 --rc geninfo_unexecuted_blocks=1 00:04:07.085 00:04:07.085 ' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.085 --rc genhtml_branch_coverage=1 00:04:07.085 --rc genhtml_function_coverage=1 00:04:07.085 --rc genhtml_legend=1 00:04:07.085 --rc geninfo_all_blocks=1 00:04:07.085 --rc geninfo_unexecuted_blocks=1 00:04:07.085 00:04:07.085 ' 00:04:07.085 04:24:29 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.085 04:24:29 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.085 04:24:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 ************************************ 00:04:07.085 START TEST env_memory 00:04:07.085 ************************************ 00:04:07.085 04:24:29 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.085 00:04:07.085 00:04:07.085 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.085 http://cunit.sourceforge.net/ 00:04:07.085 00:04:07.085 00:04:07.085 Suite: memory 00:04:07.085 Test: alloc and free memory map ...[2024-11-03 04:24:29.999321] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.085 passed 00:04:07.085 Test: mem map translation ...[2024-11-03 04:24:30.040665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.085 [2024-11-03 04:24:30.040866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.085 [2024-11-03 04:24:30.040983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.085 [2024-11-03 04:24:30.041039] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.085 passed 00:04:07.085 Test: mem map registration ...[2024-11-03 04:24:30.109576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:07.085 [2024-11-03 04:24:30.109719] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:07.085 passed 00:04:07.347 Test: mem map adjacent registrations ...passed 00:04:07.347 00:04:07.347 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.347 suites 1 1 n/a 0 0 00:04:07.347 tests 4 4 4 0 0 00:04:07.347 asserts 152 152 152 0 n/a 00:04:07.347 00:04:07.347 Elapsed time = 0.236 seconds 00:04:07.347 00:04:07.347 real 0m0.272s 00:04:07.347 user 0m0.241s 00:04:07.347 sys 0m0.022s 00:04:07.347 ************************************ 00:04:07.347 END TEST env_memory 00:04:07.347 ************************************ 00:04:07.347 04:24:30 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.347 04:24:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.347 04:24:30 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.347 04:24:30 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.347 04:24:30 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.347 04:24:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.347 ************************************ 00:04:07.347 START TEST env_vtophys 00:04:07.347 ************************************ 00:04:07.347 04:24:30 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.347 EAL: lib.eal log level changed from notice to debug 00:04:07.347 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.347 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.347 EAL: Maximum logical cores by configuration: 128 00:04:07.347 EAL: Detected CPU lcores: 10 00:04:07.347 EAL: Detected NUMA nodes: 1 00:04:07.347 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.347 EAL: Detected shared linkage of DPDK 00:04:07.347 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:07.347 EAL: Selected IOVA mode 'PA' 00:04:07.347 EAL: Probing VFIO support... 00:04:07.347 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.347 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.347 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.347 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.347 EAL: Setting up physically contiguous memory... 00:04:07.347 EAL: Setting maximum number of open files to 524288 00:04:07.347 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.347 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.347 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.347 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.347 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.347 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.347 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.347 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.347 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.347 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.347 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.347 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.347 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.347 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.347 EAL: Hugepages will be freed exactly as allocated. 00:04:07.347 EAL: No shared files mode enabled, IPC is disabled 00:04:07.347 EAL: No shared files mode enabled, IPC is disabled 00:04:07.608 EAL: TSC frequency is ~2600000 KHz 00:04:07.609 EAL: Main lcore 0 is ready (tid=7f5f432c9a40;cpuset=[0]) 00:04:07.609 EAL: Trying to obtain current memory policy. 00:04:07.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.609 EAL: Restoring previous memory policy: 0 00:04:07.609 EAL: request: mp_malloc_sync 00:04:07.609 EAL: No shared files mode enabled, IPC is disabled 00:04:07.609 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.609 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.609 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.609 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.609 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:07.609 00:04:07.609 00:04:07.609 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.609 http://cunit.sourceforge.net/ 00:04:07.609 00:04:07.609 00:04:07.609 Suite: components_suite 00:04:07.896 Test: vtophys_malloc_test ...passed 00:04:07.896 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.896 EAL: Restoring previous memory policy: 4 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.896 EAL: Trying to obtain current memory policy. 00:04:07.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.896 EAL: Restoring previous memory policy: 4 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.896 EAL: Trying to obtain current memory policy. 00:04:07.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.896 EAL: Restoring previous memory policy: 4 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.896 EAL: Trying to obtain current memory policy. 00:04:07.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.896 EAL: Restoring previous memory policy: 4 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.896 EAL: Trying to obtain current memory policy. 00:04:07.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.896 EAL: Restoring previous memory policy: 4 00:04:07.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.896 EAL: request: mp_malloc_sync 00:04:07.896 EAL: No shared files mode enabled, IPC is disabled 00:04:07.896 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.157 EAL: request: mp_malloc_sync 00:04:08.157 EAL: No shared files mode enabled, IPC is disabled 00:04:08.157 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.157 EAL: Trying to obtain current memory policy. 
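The spdk_mem_map_set_translation and spdk_mem_register errors reported by env_memory above are deliberate negative tests: both vaddr and len must be 2 MiB multiples inside the usermode address range, which is why vaddr=2097152 len=1234 and vaddr=1234 len=2097152 are rejected. A minimal sketch of the valid translation-map pattern, assuming an already initialized SPDK environment (function names follow SPDK's public env.h; the region and the 0x12345 translation value are purely illustrative, not taken from the test source):

/* Hedged sketch of the translation half of the env_memory suite. */
#include "spdk/env.h"

#define SKETCH_VADDR 0x200000ULL              /* 2 MiB aligned, illustrative */
#define SKETCH_LEN   (2ULL * 1024 * 1024)     /* 2 MiB multiple */

static int
record_and_lookup_translation(void)
{
	/* default_translation = 0 means "nothing recorded yet";
	 * NULL ops = no notify callbacks for this sketch. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, NULL, NULL);
	uint64_t len = SKETCH_LEN;
	uint64_t xlat;
	int rc = -1;

	if (map == NULL) {
		return -1;
	}

	if (spdk_mem_map_set_translation(map, SKETCH_VADDR, SKETCH_LEN,
					 0x12345ULL) == 0) {
		/* Look the recorded translation back up. */
		xlat = spdk_mem_map_translate(map, SKETCH_VADDR, &len);
		rc = (xlat == 0x12345ULL) ? 0 : -1;
	}

	spdk_mem_map_free(&map);
	return rc;
}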
00:04:08.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.157 EAL: Restoring previous memory policy: 4 00:04:08.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.157 EAL: request: mp_malloc_sync 00:04:08.157 EAL: No shared files mode enabled, IPC is disabled 00:04:08.157 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.157 EAL: request: mp_malloc_sync 00:04:08.157 EAL: No shared files mode enabled, IPC is disabled 00:04:08.157 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.157 EAL: Trying to obtain current memory policy. 00:04:08.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.157 EAL: Restoring previous memory policy: 4 00:04:08.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.157 EAL: request: mp_malloc_sync 00:04:08.157 EAL: No shared files mode enabled, IPC is disabled 00:04:08.157 EAL: Heap on socket 0 was expanded by 130MB 00:04:08.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.419 EAL: request: mp_malloc_sync 00:04:08.419 EAL: No shared files mode enabled, IPC is disabled 00:04:08.419 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.680 EAL: Trying to obtain current memory policy. 00:04:08.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.680 EAL: Restoring previous memory policy: 4 00:04:08.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.680 EAL: request: mp_malloc_sync 00:04:08.680 EAL: No shared files mode enabled, IPC is disabled 00:04:08.680 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.942 EAL: request: mp_malloc_sync 00:04:08.942 EAL: No shared files mode enabled, IPC is disabled 00:04:08.942 EAL: Heap on socket 0 was shrunk by 258MB 00:04:09.203 EAL: Trying to obtain current memory policy. 00:04:09.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.203 EAL: Restoring previous memory policy: 4 00:04:09.203 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.203 EAL: request: mp_malloc_sync 00:04:09.203 EAL: No shared files mode enabled, IPC is disabled 00:04:09.203 EAL: Heap on socket 0 was expanded by 514MB 00:04:10.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.146 EAL: request: mp_malloc_sync 00:04:10.146 EAL: No shared files mode enabled, IPC is disabled 00:04:10.146 EAL: Heap on socket 0 was shrunk by 514MB 00:04:10.405 EAL: Trying to obtain current memory policy. 
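The repeated heap expand/shrink rounds in this vtophys run come from the malloc tests allocating progressively larger buffers and freeing them again; each growth maps more hugepage memory and fires the registered 'spdk' mem event callback seen in the trace. A minimal sketch of the allocate-and-translate pattern the suite is built around, again assuming an initialized environment (spdk_dma_malloc, spdk_vtophys and SPDK_VTOPHYS_ERROR come from SPDK's public env.h; the helper name, alignment and sizes are illustrative):

/* Hedged sketch: DMA-safe allocation plus virtual-to-physical lookup. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

static int
check_translation(size_t size)
{
	uint64_t phys, mapped_len = size;
	/* Growing requests like this are what trigger the
	 * "Heap on socket 0 was expanded by ..." events above. */
	void *buf = spdk_dma_malloc(size, 0x1000 /* align */, NULL);

	if (buf == NULL) {
		return -1;
	}

	phys = spdk_vtophys(buf, &mapped_len);
	if (phys == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}

	printf("va=%p pa=0x%" PRIx64 " contiguous=%zu bytes\n",
	       buf, phys, (size_t)mapped_len);
	spdk_dma_free(buf);
	return 0;
}

As a rule, only memory the env layer knows about (hugepage-backed allocations, or regions handed to spdk_mem_register()) translates successfully; anything else comes back as SPDK_VTOPHYS_ERROR.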
00:04:10.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.664 EAL: Restoring previous memory policy: 4 00:04:10.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.664 EAL: request: mp_malloc_sync 00:04:10.664 EAL: No shared files mode enabled, IPC is disabled 00:04:10.664 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.597 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.597 EAL: request: mp_malloc_sync 00:04:11.597 EAL: No shared files mode enabled, IPC is disabled 00:04:11.597 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.534 passed 00:04:12.534 00:04:12.534 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.534 suites 1 1 n/a 0 0 00:04:12.534 tests 2 2 2 0 0 00:04:12.534 asserts 5803 5803 5803 0 n/a 00:04:12.534 00:04:12.534 Elapsed time = 4.836 seconds 00:04:12.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.534 EAL: request: mp_malloc_sync 00:04:12.534 EAL: No shared files mode enabled, IPC is disabled 00:04:12.534 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.534 EAL: No shared files mode enabled, IPC is disabled 00:04:12.534 EAL: No shared files mode enabled, IPC is disabled 00:04:12.534 EAL: No shared files mode enabled, IPC is disabled 00:04:12.534 00:04:12.534 real 0m5.118s 00:04:12.534 user 0m4.160s 00:04:12.534 sys 0m0.805s 00:04:12.534 ************************************ 00:04:12.534 END TEST env_vtophys 00:04:12.534 ************************************ 00:04:12.534 04:24:35 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.534 04:24:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.534 04:24:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.534 04:24:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.534 04:24:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.534 04:24:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.534 ************************************ 00:04:12.534 START TEST env_pci 00:04:12.534 ************************************ 00:04:12.534 04:24:35 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.534 00:04:12.534 00:04:12.534 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.534 http://cunit.sourceforge.net/ 00:04:12.534 00:04:12.534 00:04:12.534 Suite: pci 00:04:12.534 Test: pci_hook ...[2024-11-03 04:24:35.478941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57041 has claimed it 00:04:12.534 passed 00:04:12.534 00:04:12.534 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.534 suites 1 1 n/a 0 0 00:04:12.534 tests 1 1 1 0 0 00:04:12.534 asserts 25 25 25 0 n/a 00:04:12.534 00:04:12.534 Elapsed time = 0.006 seconds 00:04:12.534 EAL: Cannot find device (10000:00:01.0) 00:04:12.534 EAL: Failed to attach device on primary process 00:04:12.534 00:04:12.534 real 0m0.065s 00:04:12.534 user 0m0.033s 00:04:12.534 sys 0m0.029s 00:04:12.534 04:24:35 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.534 ************************************ 00:04:12.534 END TEST env_pci 00:04:12.534 ************************************ 00:04:12.534 04:24:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.534 04:24:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.534 04:24:35 env -- env/env.sh@15 -- # uname 00:04:12.534 04:24:35 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.534 04:24:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.534 04:24:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.534 04:24:35 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:12.534 04:24:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.534 04:24:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.534 ************************************ 00:04:12.534 START TEST env_dpdk_post_init 00:04:12.534 ************************************ 00:04:12.534 04:24:35 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.796 EAL: Detected CPU lcores: 10 00:04:12.796 EAL: Detected NUMA nodes: 1 00:04:12.796 EAL: Detected shared linkage of DPDK 00:04:12.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.796 EAL: Selected IOVA mode 'PA' 00:04:12.796 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:12.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:12.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:12.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:12.796 Starting DPDK initialization... 00:04:12.796 Starting SPDK post initialization... 00:04:12.796 SPDK NVMe probe 00:04:12.796 Attaching to 0000:00:10.0 00:04:12.796 Attaching to 0000:00:11.0 00:04:12.796 Attaching to 0000:00:12.0 00:04:12.796 Attaching to 0000:00:13.0 00:04:12.796 Attached to 0000:00:10.0 00:04:12.796 Attached to 0000:00:11.0 00:04:12.796 Attached to 0000:00:13.0 00:04:12.796 Attached to 0000:00:12.0 00:04:12.796 Cleaning up... 
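Before the probe messages above, env_dpdk_post_init brings up the environment with the same flags passed on its command line: core mask 0x1 and a fixed base virtual address of 0x200000000000. A minimal sketch of that setup, with field and function names taken from SPDK's public env.h (the application name is illustrative):

/* Hedged sketch of the environment setup env_dpdk_post_init performs
 * before probing the emulated NVMe controllers (0000:00:10.0..13.0). */
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init_sketch";   /* illustrative name */
	opts.core_mask = "0x1";                    /* matches -c 0x1 */
	opts.base_virtaddr = 0x200000000000ULL;    /* matches --base-virtaddr */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* With the env up, the test proceeds to attach the NVMe devices
	 * via the spdk_nvme_probe() path, then tears everything down. */
	return 0;
}

Fixing base_virtaddr is what makes the virtual areas and mem-callback registrations elsewhere in this log land at predictable 0x2000xxxxxxxx addresses.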
00:04:12.796 00:04:12.796 real 0m0.236s 00:04:12.796 user 0m0.070s 00:04:12.796 sys 0m0.068s 00:04:12.796 04:24:35 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.796 04:24:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.796 ************************************ 00:04:12.796 END TEST env_dpdk_post_init 00:04:12.796 ************************************ 00:04:12.796 04:24:35 env -- env/env.sh@26 -- # uname 00:04:12.796 04:24:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.796 04:24:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.796 04:24:35 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.796 04:24:35 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.796 04:24:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.796 ************************************ 00:04:12.796 START TEST env_mem_callbacks 00:04:12.796 ************************************ 00:04:12.796 04:24:35 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.055 EAL: Detected CPU lcores: 10 00:04:13.055 EAL: Detected NUMA nodes: 1 00:04:13.055 EAL: Detected shared linkage of DPDK 00:04:13.055 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.055 EAL: Selected IOVA mode 'PA' 00:04:13.055 00:04:13.055 00:04:13.055 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.055 http://cunit.sourceforge.net/ 00:04:13.055 00:04:13.055 00:04:13.055 Suite: memory 00:04:13.055 Test: test ... 00:04:13.055 register 0x200000200000 2097152 00:04:13.055 malloc 3145728 00:04:13.055 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.055 register 0x200000400000 4194304 00:04:13.055 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.055 malloc 64 00:04:13.055 buf 0x2000004ffec0 len 64 PASSED 00:04:13.055 malloc 4194304 00:04:13.055 register 0x200000800000 6291456 00:04:13.055 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.055 free 0x2000004fffc0 3145728 00:04:13.055 free 0x2000004ffec0 64 00:04:13.055 unregister 0x200000400000 4194304 PASSED 00:04:13.055 free 0x2000009fffc0 4194304 00:04:13.055 unregister 0x200000800000 6291456 PASSED 00:04:13.055 malloc 8388608 00:04:13.055 register 0x200000400000 10485760 00:04:13.055 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.055 free 0x2000005fffc0 8388608 00:04:13.055 unregister 0x200000400000 10485760 PASSED 00:04:13.055 passed 00:04:13.055 00:04:13.055 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.055 suites 1 1 n/a 0 0 00:04:13.055 tests 1 1 1 0 0 00:04:13.055 asserts 15 15 15 0 n/a 00:04:13.055 00:04:13.055 Elapsed time = 0.040 seconds 00:04:13.055 00:04:13.055 real 0m0.196s 00:04:13.055 user 0m0.049s 00:04:13.055 sys 0m0.046s 00:04:13.055 04:24:36 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.055 04:24:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.055 ************************************ 00:04:13.055 END TEST env_mem_callbacks 00:04:13.055 ************************************ 00:04:13.055 00:04:13.055 real 0m6.340s 00:04:13.055 user 0m4.697s 00:04:13.055 sys 0m1.196s 00:04:13.055 ************************************ 00:04:13.055 END TEST env 00:04:13.055 ************************************ 00:04:13.055 04:24:36 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.055 04:24:36 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:13.315 04:24:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.315 04:24:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.315 04:24:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.315 04:24:36 -- common/autotest_common.sh@10 -- # set +x 00:04:13.315 ************************************ 00:04:13.315 START TEST rpc 00:04:13.315 ************************************ 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.315 * Looking for test storage... 00:04:13.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.315 04:24:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.315 04:24:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.315 04:24:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.315 04:24:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.315 04:24:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.315 04:24:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.315 04:24:36 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.315 04:24:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.315 04:24:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.315 04:24:36 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.315 04:24:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.315 04:24:36 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.315 04:24:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.315 04:24:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.315 04:24:36 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.315 --rc genhtml_branch_coverage=1 00:04:13.315 --rc genhtml_function_coverage=1 00:04:13.315 --rc genhtml_legend=1 00:04:13.315 --rc geninfo_all_blocks=1 00:04:13.315 --rc geninfo_unexecuted_blocks=1 00:04:13.315 00:04:13.315 ' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.315 --rc genhtml_branch_coverage=1 00:04:13.315 --rc genhtml_function_coverage=1 00:04:13.315 --rc genhtml_legend=1 00:04:13.315 --rc geninfo_all_blocks=1 00:04:13.315 --rc geninfo_unexecuted_blocks=1 00:04:13.315 00:04:13.315 ' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.315 --rc genhtml_branch_coverage=1 00:04:13.315 --rc genhtml_function_coverage=1 00:04:13.315 --rc genhtml_legend=1 00:04:13.315 --rc geninfo_all_blocks=1 00:04:13.315 --rc geninfo_unexecuted_blocks=1 00:04:13.315 00:04:13.315 ' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.315 --rc genhtml_branch_coverage=1 00:04:13.315 --rc genhtml_function_coverage=1 00:04:13.315 --rc genhtml_legend=1 00:04:13.315 --rc geninfo_all_blocks=1 00:04:13.315 --rc geninfo_unexecuted_blocks=1 00:04:13.315 00:04:13.315 ' 00:04:13.315 04:24:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57168 00:04:13.315 04:24:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.315 04:24:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57168 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@833 -- # '[' -z 57168 ']' 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.315 04:24:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.315 04:24:36 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:13.316 04:24:36 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.316 04:24:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.574 [2024-11-03 04:24:36.395778] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:04:13.574 [2024-11-03 04:24:36.395879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57168 ] 00:04:13.574 [2024-11-03 04:24:36.540029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.574 [2024-11-03 04:24:36.626261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.574 [2024-11-03 04:24:36.626307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57168' to capture a snapshot of events at runtime. 00:04:13.574 [2024-11-03 04:24:36.626315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.574 [2024-11-03 04:24:36.626323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.574 [2024-11-03 04:24:36.626329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57168 for offline analysis/debug. 00:04:13.574 [2024-11-03 04:24:36.627178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.509 04:24:37 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.509 04:24:37 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:14.509 04:24:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.509 04:24:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.509 04:24:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.509 04:24:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.509 04:24:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.509 04:24:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.509 04:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 ************************************ 00:04:14.509 START TEST rpc_integrity 00:04:14.509 ************************************ 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.509 04:24:37 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.509 { 00:04:14.509 "name": "Malloc0", 00:04:14.509 "aliases": [ 00:04:14.509 "313f5db6-9595-406c-93f4-3adf49ff8ae5" 00:04:14.509 ], 00:04:14.509 "product_name": "Malloc disk", 00:04:14.509 "block_size": 512, 00:04:14.509 "num_blocks": 16384, 00:04:14.509 "uuid": "313f5db6-9595-406c-93f4-3adf49ff8ae5", 00:04:14.509 "assigned_rate_limits": { 00:04:14.509 "rw_ios_per_sec": 0, 00:04:14.509 "rw_mbytes_per_sec": 0, 00:04:14.509 "r_mbytes_per_sec": 0, 00:04:14.509 "w_mbytes_per_sec": 0 00:04:14.509 }, 00:04:14.509 "claimed": false, 00:04:14.509 "zoned": false, 00:04:14.509 "supported_io_types": { 00:04:14.509 "read": true, 00:04:14.509 "write": true, 00:04:14.509 "unmap": true, 00:04:14.509 "flush": true, 00:04:14.509 "reset": true, 00:04:14.509 "nvme_admin": false, 00:04:14.509 "nvme_io": false, 00:04:14.509 "nvme_io_md": false, 00:04:14.509 "write_zeroes": true, 00:04:14.509 "zcopy": true, 00:04:14.509 "get_zone_info": false, 00:04:14.509 "zone_management": false, 00:04:14.509 "zone_append": false, 00:04:14.509 "compare": false, 00:04:14.509 "compare_and_write": false, 00:04:14.509 "abort": true, 00:04:14.509 "seek_hole": false, 00:04:14.509 "seek_data": false, 00:04:14.509 "copy": true, 00:04:14.509 "nvme_iov_md": false 00:04:14.509 }, 00:04:14.509 "memory_domains": [ 00:04:14.509 { 00:04:14.509 "dma_device_id": "system", 00:04:14.509 "dma_device_type": 1 00:04:14.509 }, 00:04:14.509 { 00:04:14.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.509 "dma_device_type": 2 00:04:14.509 } 00:04:14.509 ], 00:04:14.509 "driver_specific": {} 00:04:14.509 } 00:04:14.509 ]' 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 [2024-11-03 04:24:37.359490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.509 [2024-11-03 04:24:37.359541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.509 [2024-11-03 04:24:37.359572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:14.509 [2024-11-03 04:24:37.359582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.509 [2024-11-03 04:24:37.361351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.509 [2024-11-03 04:24:37.361387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.509 Passthru0 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.509 
04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.509 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.509 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.509 { 00:04:14.509 "name": "Malloc0", 00:04:14.509 "aliases": [ 00:04:14.509 "313f5db6-9595-406c-93f4-3adf49ff8ae5" 00:04:14.509 ], 00:04:14.509 "product_name": "Malloc disk", 00:04:14.509 "block_size": 512, 00:04:14.509 "num_blocks": 16384, 00:04:14.509 "uuid": "313f5db6-9595-406c-93f4-3adf49ff8ae5", 00:04:14.509 "assigned_rate_limits": { 00:04:14.509 "rw_ios_per_sec": 0, 00:04:14.509 "rw_mbytes_per_sec": 0, 00:04:14.509 "r_mbytes_per_sec": 0, 00:04:14.509 "w_mbytes_per_sec": 0 00:04:14.509 }, 00:04:14.509 "claimed": true, 00:04:14.509 "claim_type": "exclusive_write", 00:04:14.509 "zoned": false, 00:04:14.509 "supported_io_types": { 00:04:14.509 "read": true, 00:04:14.509 "write": true, 00:04:14.509 "unmap": true, 00:04:14.509 "flush": true, 00:04:14.509 "reset": true, 00:04:14.509 "nvme_admin": false, 00:04:14.509 "nvme_io": false, 00:04:14.509 "nvme_io_md": false, 00:04:14.509 "write_zeroes": true, 00:04:14.509 "zcopy": true, 00:04:14.509 "get_zone_info": false, 00:04:14.509 "zone_management": false, 00:04:14.509 "zone_append": false, 00:04:14.509 "compare": false, 00:04:14.509 "compare_and_write": false, 00:04:14.509 "abort": true, 00:04:14.509 "seek_hole": false, 00:04:14.509 "seek_data": false, 00:04:14.509 "copy": true, 00:04:14.509 "nvme_iov_md": false 00:04:14.509 }, 00:04:14.509 "memory_domains": [ 00:04:14.509 { 00:04:14.509 "dma_device_id": "system", 00:04:14.509 "dma_device_type": 1 00:04:14.509 }, 00:04:14.509 { 00:04:14.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.509 "dma_device_type": 2 00:04:14.509 } 00:04:14.509 ], 00:04:14.509 "driver_specific": {} 00:04:14.509 }, 00:04:14.509 { 00:04:14.509 "name": "Passthru0", 00:04:14.509 "aliases": [ 00:04:14.509 "30b2e892-4b98-576f-a91f-bffdcce7df31" 00:04:14.509 ], 00:04:14.509 "product_name": "passthru", 00:04:14.509 "block_size": 512, 00:04:14.510 "num_blocks": 16384, 00:04:14.510 "uuid": "30b2e892-4b98-576f-a91f-bffdcce7df31", 00:04:14.510 "assigned_rate_limits": { 00:04:14.510 "rw_ios_per_sec": 0, 00:04:14.510 "rw_mbytes_per_sec": 0, 00:04:14.510 "r_mbytes_per_sec": 0, 00:04:14.510 "w_mbytes_per_sec": 0 00:04:14.510 }, 00:04:14.510 "claimed": false, 00:04:14.510 "zoned": false, 00:04:14.510 "supported_io_types": { 00:04:14.510 "read": true, 00:04:14.510 "write": true, 00:04:14.510 "unmap": true, 00:04:14.510 "flush": true, 00:04:14.510 "reset": true, 00:04:14.510 "nvme_admin": false, 00:04:14.510 "nvme_io": false, 00:04:14.510 "nvme_io_md": false, 00:04:14.510 "write_zeroes": true, 00:04:14.510 "zcopy": true, 00:04:14.510 "get_zone_info": false, 00:04:14.510 "zone_management": false, 00:04:14.510 "zone_append": false, 00:04:14.510 "compare": false, 00:04:14.510 "compare_and_write": false, 00:04:14.510 "abort": true, 00:04:14.510 "seek_hole": false, 00:04:14.510 "seek_data": false, 00:04:14.510 "copy": true, 00:04:14.510 "nvme_iov_md": false 00:04:14.510 }, 00:04:14.510 "memory_domains": [ 00:04:14.510 { 00:04:14.510 "dma_device_id": "system", 00:04:14.510 "dma_device_type": 1 00:04:14.510 }, 00:04:14.510 { 00:04:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.510 "dma_device_type": 2 
00:04:14.510 } 00:04:14.510 ], 00:04:14.510 "driver_specific": { 00:04:14.510 "passthru": { 00:04:14.510 "name": "Passthru0", 00:04:14.510 "base_bdev_name": "Malloc0" 00:04:14.510 } 00:04:14.510 } 00:04:14.510 } 00:04:14.510 ]' 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.510 ************************************ 00:04:14.510 END TEST rpc_integrity 00:04:14.510 ************************************ 00:04:14.510 04:24:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.510 00:04:14.510 real 0m0.242s 00:04:14.510 user 0m0.130s 00:04:14.510 sys 0m0.034s 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.510 04:24:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.510 04:24:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.510 04:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 ************************************ 00:04:14.510 START TEST rpc_plugins 00:04:14.510 ************************************ 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.510 { 00:04:14.510 "name": "Malloc1", 00:04:14.510 "aliases": 
[ 00:04:14.510 "e5fa7a1c-ed97-4c4d-bc23-49dc1bf0eca2" 00:04:14.510 ], 00:04:14.510 "product_name": "Malloc disk", 00:04:14.510 "block_size": 4096, 00:04:14.510 "num_blocks": 256, 00:04:14.510 "uuid": "e5fa7a1c-ed97-4c4d-bc23-49dc1bf0eca2", 00:04:14.510 "assigned_rate_limits": { 00:04:14.510 "rw_ios_per_sec": 0, 00:04:14.510 "rw_mbytes_per_sec": 0, 00:04:14.510 "r_mbytes_per_sec": 0, 00:04:14.510 "w_mbytes_per_sec": 0 00:04:14.510 }, 00:04:14.510 "claimed": false, 00:04:14.510 "zoned": false, 00:04:14.510 "supported_io_types": { 00:04:14.510 "read": true, 00:04:14.510 "write": true, 00:04:14.510 "unmap": true, 00:04:14.510 "flush": true, 00:04:14.510 "reset": true, 00:04:14.510 "nvme_admin": false, 00:04:14.510 "nvme_io": false, 00:04:14.510 "nvme_io_md": false, 00:04:14.510 "write_zeroes": true, 00:04:14.510 "zcopy": true, 00:04:14.510 "get_zone_info": false, 00:04:14.510 "zone_management": false, 00:04:14.510 "zone_append": false, 00:04:14.510 "compare": false, 00:04:14.510 "compare_and_write": false, 00:04:14.510 "abort": true, 00:04:14.510 "seek_hole": false, 00:04:14.510 "seek_data": false, 00:04:14.510 "copy": true, 00:04:14.510 "nvme_iov_md": false 00:04:14.510 }, 00:04:14.510 "memory_domains": [ 00:04:14.510 { 00:04:14.510 "dma_device_id": "system", 00:04:14.510 "dma_device_type": 1 00:04:14.510 }, 00:04:14.510 { 00:04:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.510 "dma_device_type": 2 00:04:14.510 } 00:04:14.510 ], 00:04:14.510 "driver_specific": {} 00:04:14.510 } 00:04:14.510 ]' 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.510 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.510 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.769 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.769 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.769 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.769 ************************************ 00:04:14.769 END TEST rpc_plugins 00:04:14.769 ************************************ 00:04:14.769 04:24:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.769 00:04:14.769 real 0m0.118s 00:04:14.769 user 0m0.061s 00:04:14.769 sys 0m0.019s 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.769 04:24:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.769 04:24:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.769 04:24:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.769 04:24:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.769 04:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.769 ************************************ 00:04:14.769 START TEST rpc_trace_cmd_test 00:04:14.769 ************************************ 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.769 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57168", 00:04:14.769 "tpoint_group_mask": "0x8", 00:04:14.769 "iscsi_conn": { 00:04:14.769 "mask": "0x2", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "scsi": { 00:04:14.769 "mask": "0x4", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "bdev": { 00:04:14.769 "mask": "0x8", 00:04:14.769 "tpoint_mask": "0xffffffffffffffff" 00:04:14.769 }, 00:04:14.769 "nvmf_rdma": { 00:04:14.769 "mask": "0x10", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "nvmf_tcp": { 00:04:14.769 "mask": "0x20", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "ftl": { 00:04:14.769 "mask": "0x40", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "blobfs": { 00:04:14.769 "mask": "0x80", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "dsa": { 00:04:14.769 "mask": "0x200", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "thread": { 00:04:14.769 "mask": "0x400", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "nvme_pcie": { 00:04:14.769 "mask": "0x800", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "iaa": { 00:04:14.769 "mask": "0x1000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "nvme_tcp": { 00:04:14.769 "mask": "0x2000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "bdev_nvme": { 00:04:14.769 "mask": "0x4000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "sock": { 00:04:14.769 "mask": "0x8000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "blob": { 00:04:14.769 "mask": "0x10000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "bdev_raid": { 00:04:14.769 "mask": "0x20000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 }, 00:04:14.769 "scheduler": { 00:04:14.769 "mask": "0x40000", 00:04:14.769 "tpoint_mask": "0x0" 00:04:14.769 } 00:04:14.769 }' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.769 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.028 ************************************ 00:04:15.028 END TEST rpc_trace_cmd_test 00:04:15.028 ************************************ 00:04:15.028 04:24:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.028 00:04:15.028 real 0m0.172s 
00:04:15.028 user 0m0.139s 00:04:15.028 sys 0m0.024s 00:04:15.028 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.028 04:24:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.028 04:24:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.028 04:24:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.028 04:24:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.028 04:24:37 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.028 04:24:37 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.028 04:24:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.028 ************************************ 00:04:15.028 START TEST rpc_daemon_integrity 00:04:15.028 ************************************ 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.028 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.028 { 00:04:15.028 "name": "Malloc2", 00:04:15.028 "aliases": [ 00:04:15.028 "67ab3d78-3c48-41ff-bf15-ad50e00b8b76" 00:04:15.028 ], 00:04:15.028 "product_name": "Malloc disk", 00:04:15.028 "block_size": 512, 00:04:15.028 "num_blocks": 16384, 00:04:15.028 "uuid": "67ab3d78-3c48-41ff-bf15-ad50e00b8b76", 00:04:15.028 "assigned_rate_limits": { 00:04:15.028 "rw_ios_per_sec": 0, 00:04:15.028 "rw_mbytes_per_sec": 0, 00:04:15.028 "r_mbytes_per_sec": 0, 00:04:15.028 "w_mbytes_per_sec": 0 00:04:15.028 }, 00:04:15.028 "claimed": false, 00:04:15.028 "zoned": false, 00:04:15.028 "supported_io_types": { 00:04:15.028 "read": true, 00:04:15.028 "write": true, 00:04:15.028 "unmap": true, 00:04:15.028 "flush": true, 00:04:15.028 "reset": true, 00:04:15.028 "nvme_admin": false, 00:04:15.028 "nvme_io": false, 00:04:15.028 "nvme_io_md": false, 00:04:15.028 "write_zeroes": true, 00:04:15.028 "zcopy": true, 00:04:15.028 "get_zone_info": false, 00:04:15.028 "zone_management": false, 00:04:15.028 "zone_append": false, 00:04:15.028 "compare": false, 00:04:15.029 
"compare_and_write": false, 00:04:15.029 "abort": true, 00:04:15.029 "seek_hole": false, 00:04:15.029 "seek_data": false, 00:04:15.029 "copy": true, 00:04:15.029 "nvme_iov_md": false 00:04:15.029 }, 00:04:15.029 "memory_domains": [ 00:04:15.029 { 00:04:15.029 "dma_device_id": "system", 00:04:15.029 "dma_device_type": 1 00:04:15.029 }, 00:04:15.029 { 00:04:15.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.029 "dma_device_type": 2 00:04:15.029 } 00:04:15.029 ], 00:04:15.029 "driver_specific": {} 00:04:15.029 } 00:04:15.029 ]' 00:04:15.029 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.029 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.029 04:24:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.029 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.029 04:24:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.029 [2024-11-03 04:24:37.999568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.029 [2024-11-03 04:24:37.999609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.029 [2024-11-03 04:24:37.999624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:15.029 [2024-11-03 04:24:37.999632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.029 [2024-11-03 04:24:38.001284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.029 [2024-11-03 04:24:38.001315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.029 Passthru0 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.029 { 00:04:15.029 "name": "Malloc2", 00:04:15.029 "aliases": [ 00:04:15.029 "67ab3d78-3c48-41ff-bf15-ad50e00b8b76" 00:04:15.029 ], 00:04:15.029 "product_name": "Malloc disk", 00:04:15.029 "block_size": 512, 00:04:15.029 "num_blocks": 16384, 00:04:15.029 "uuid": "67ab3d78-3c48-41ff-bf15-ad50e00b8b76", 00:04:15.029 "assigned_rate_limits": { 00:04:15.029 "rw_ios_per_sec": 0, 00:04:15.029 "rw_mbytes_per_sec": 0, 00:04:15.029 "r_mbytes_per_sec": 0, 00:04:15.029 "w_mbytes_per_sec": 0 00:04:15.029 }, 00:04:15.029 "claimed": true, 00:04:15.029 "claim_type": "exclusive_write", 00:04:15.029 "zoned": false, 00:04:15.029 "supported_io_types": { 00:04:15.029 "read": true, 00:04:15.029 "write": true, 00:04:15.029 "unmap": true, 00:04:15.029 "flush": true, 00:04:15.029 "reset": true, 00:04:15.029 "nvme_admin": false, 00:04:15.029 "nvme_io": false, 00:04:15.029 "nvme_io_md": false, 00:04:15.029 "write_zeroes": true, 00:04:15.029 "zcopy": true, 00:04:15.029 "get_zone_info": false, 00:04:15.029 "zone_management": false, 00:04:15.029 "zone_append": false, 00:04:15.029 "compare": false, 00:04:15.029 "compare_and_write": false, 00:04:15.029 "abort": true, 00:04:15.029 "seek_hole": false, 00:04:15.029 "seek_data": false, 
00:04:15.029 "copy": true, 00:04:15.029 "nvme_iov_md": false 00:04:15.029 }, 00:04:15.029 "memory_domains": [ 00:04:15.029 { 00:04:15.029 "dma_device_id": "system", 00:04:15.029 "dma_device_type": 1 00:04:15.029 }, 00:04:15.029 { 00:04:15.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.029 "dma_device_type": 2 00:04:15.029 } 00:04:15.029 ], 00:04:15.029 "driver_specific": {} 00:04:15.029 }, 00:04:15.029 { 00:04:15.029 "name": "Passthru0", 00:04:15.029 "aliases": [ 00:04:15.029 "60d9bab6-19b4-557e-a89e-6d0db74dca95" 00:04:15.029 ], 00:04:15.029 "product_name": "passthru", 00:04:15.029 "block_size": 512, 00:04:15.029 "num_blocks": 16384, 00:04:15.029 "uuid": "60d9bab6-19b4-557e-a89e-6d0db74dca95", 00:04:15.029 "assigned_rate_limits": { 00:04:15.029 "rw_ios_per_sec": 0, 00:04:15.029 "rw_mbytes_per_sec": 0, 00:04:15.029 "r_mbytes_per_sec": 0, 00:04:15.029 "w_mbytes_per_sec": 0 00:04:15.029 }, 00:04:15.029 "claimed": false, 00:04:15.029 "zoned": false, 00:04:15.029 "supported_io_types": { 00:04:15.029 "read": true, 00:04:15.029 "write": true, 00:04:15.029 "unmap": true, 00:04:15.029 "flush": true, 00:04:15.029 "reset": true, 00:04:15.029 "nvme_admin": false, 00:04:15.029 "nvme_io": false, 00:04:15.029 "nvme_io_md": false, 00:04:15.029 "write_zeroes": true, 00:04:15.029 "zcopy": true, 00:04:15.029 "get_zone_info": false, 00:04:15.029 "zone_management": false, 00:04:15.029 "zone_append": false, 00:04:15.029 "compare": false, 00:04:15.029 "compare_and_write": false, 00:04:15.029 "abort": true, 00:04:15.029 "seek_hole": false, 00:04:15.029 "seek_data": false, 00:04:15.029 "copy": true, 00:04:15.029 "nvme_iov_md": false 00:04:15.029 }, 00:04:15.029 "memory_domains": [ 00:04:15.029 { 00:04:15.029 "dma_device_id": "system", 00:04:15.029 "dma_device_type": 1 00:04:15.029 }, 00:04:15.029 { 00:04:15.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.029 "dma_device_type": 2 00:04:15.029 } 00:04:15.029 ], 00:04:15.029 "driver_specific": { 00:04:15.029 "passthru": { 00:04:15.029 "name": "Passthru0", 00:04:15.029 "base_bdev_name": "Malloc2" 00:04:15.029 } 00:04:15.029 } 00:04:15.029 } 00:04:15.029 ]' 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:15.029 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.288 ************************************ 00:04:15.288 END TEST rpc_daemon_integrity 00:04:15.288 ************************************ 00:04:15.288 04:24:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.288 00:04:15.288 real 0m0.236s 00:04:15.288 user 0m0.122s 00:04:15.288 sys 0m0.039s 00:04:15.288 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.288 04:24:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.288 04:24:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.288 04:24:38 rpc -- rpc/rpc.sh@84 -- # killprocess 57168 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@952 -- # '[' -z 57168 ']' 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@956 -- # kill -0 57168 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@957 -- # uname 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57168 00:04:15.288 killing process with pid 57168 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57168' 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@971 -- # kill 57168 00:04:15.288 04:24:38 rpc -- common/autotest_common.sh@976 -- # wait 57168 00:04:16.667 ************************************ 00:04:16.667 END TEST rpc 00:04:16.667 ************************************ 00:04:16.667 00:04:16.667 real 0m3.170s 00:04:16.667 user 0m3.571s 00:04:16.667 sys 0m0.616s 00:04:16.667 04:24:39 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.667 04:24:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.667 04:24:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.667 04:24:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.668 04:24:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.668 04:24:39 -- common/autotest_common.sh@10 -- # set +x 00:04:16.668 ************************************ 00:04:16.668 START TEST skip_rpc 00:04:16.668 ************************************ 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.668 * Looking for test storage... 
00:04:16.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.668 04:24:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.668 --rc genhtml_branch_coverage=1 00:04:16.668 --rc genhtml_function_coverage=1 00:04:16.668 --rc genhtml_legend=1 00:04:16.668 --rc geninfo_all_blocks=1 00:04:16.668 --rc geninfo_unexecuted_blocks=1 00:04:16.668 00:04:16.668 ' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.668 --rc genhtml_branch_coverage=1 00:04:16.668 --rc genhtml_function_coverage=1 00:04:16.668 --rc genhtml_legend=1 00:04:16.668 --rc geninfo_all_blocks=1 00:04:16.668 --rc geninfo_unexecuted_blocks=1 00:04:16.668 00:04:16.668 ' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.668 --rc genhtml_branch_coverage=1 00:04:16.668 --rc genhtml_function_coverage=1 00:04:16.668 --rc genhtml_legend=1 00:04:16.668 --rc geninfo_all_blocks=1 00:04:16.668 --rc geninfo_unexecuted_blocks=1 00:04:16.668 00:04:16.668 ' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.668 --rc genhtml_branch_coverage=1 00:04:16.668 --rc genhtml_function_coverage=1 00:04:16.668 --rc genhtml_legend=1 00:04:16.668 --rc geninfo_all_blocks=1 00:04:16.668 --rc geninfo_unexecuted_blocks=1 00:04:16.668 00:04:16.668 ' 00:04:16.668 04:24:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.668 04:24:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.668 04:24:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.668 04:24:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.668 ************************************ 00:04:16.668 START TEST skip_rpc 00:04:16.668 ************************************ 00:04:16.668 04:24:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:16.668 04:24:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57375 00:04:16.668 04:24:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.668 04:24:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.668 04:24:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.668 [2024-11-03 04:24:39.640461] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:16.668 [2024-11-03 04:24:39.640624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57375 ] 00:04:16.933 [2024-11-03 04:24:39.802767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.933 [2024-11-03 04:24:39.931093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57375 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57375 ']' 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57375 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57375 00:04:22.206 killing process with pid 57375 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57375' 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57375 00:04:22.206 04:24:44 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57375 00:04:22.775 ************************************ 00:04:22.775 END TEST skip_rpc 00:04:22.775 ************************************ 00:04:22.775 00:04:22.775 real 0m6.196s 00:04:22.775 user 0m5.710s 00:04:22.775 sys 0m0.383s 00:04:22.775 04:24:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.775 04:24:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:22.775 04:24:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:22.775 04:24:45 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.775 04:24:45 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.775 04:24:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.775 ************************************ 00:04:22.775 START TEST skip_rpc_with_json 00:04:22.775 ************************************ 00:04:22.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57468 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57468 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57468 ']' 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.775 04:24:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.033 [2024-11-03 04:24:45.874053] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:23.033 [2024-11-03 04:24:45.874336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57468 ] 00:04:23.033 [2024-11-03 04:24:46.032889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.290 [2024-11-03 04:24:46.125173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.855 [2024-11-03 04:24:46.704966] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.855 request: 00:04:23.855 { 00:04:23.855 "trtype": "tcp", 00:04:23.855 "method": "nvmf_get_transports", 00:04:23.855 "req_id": 1 00:04:23.855 } 00:04:23.855 Got JSON-RPC error response 00:04:23.855 response: 00:04:23.855 { 00:04:23.855 "code": -19, 00:04:23.855 "message": "No such device" 00:04:23.855 } 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.855 [2024-11-03 04:24:46.717058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.855 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.855 { 00:04:23.855 "subsystems": [ 00:04:23.855 { 00:04:23.855 "subsystem": "fsdev", 00:04:23.855 "config": [ 00:04:23.855 { 00:04:23.855 "method": "fsdev_set_opts", 00:04:23.855 "params": { 00:04:23.855 "fsdev_io_pool_size": 65535, 00:04:23.855 "fsdev_io_cache_size": 256 00:04:23.855 } 00:04:23.855 } 00:04:23.855 ] 00:04:23.855 }, 00:04:23.855 { 00:04:23.855 "subsystem": "keyring", 00:04:23.855 "config": [] 00:04:23.855 }, 00:04:23.855 { 00:04:23.855 "subsystem": "iobuf", 00:04:23.855 "config": [ 00:04:23.855 { 00:04:23.855 "method": "iobuf_set_options", 00:04:23.855 "params": { 00:04:23.855 "small_pool_count": 8192, 00:04:23.855 "large_pool_count": 1024, 00:04:23.855 "small_bufsize": 8192, 00:04:23.855 "large_bufsize": 135168, 00:04:23.855 "enable_numa": false 00:04:23.855 } 00:04:23.855 } 00:04:23.855 ] 00:04:23.855 }, 00:04:23.855 { 00:04:23.855 "subsystem": "sock", 00:04:23.855 "config": [ 00:04:23.855 { 
00:04:23.855 "method": "sock_set_default_impl", 00:04:23.855 "params": { 00:04:23.855 "impl_name": "posix" 00:04:23.855 } 00:04:23.855 }, 00:04:23.855 { 00:04:23.856 "method": "sock_impl_set_options", 00:04:23.856 "params": { 00:04:23.856 "impl_name": "ssl", 00:04:23.856 "recv_buf_size": 4096, 00:04:23.856 "send_buf_size": 4096, 00:04:23.856 "enable_recv_pipe": true, 00:04:23.856 "enable_quickack": false, 00:04:23.856 "enable_placement_id": 0, 00:04:23.856 "enable_zerocopy_send_server": true, 00:04:23.856 "enable_zerocopy_send_client": false, 00:04:23.856 "zerocopy_threshold": 0, 00:04:23.856 "tls_version": 0, 00:04:23.856 "enable_ktls": false 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "sock_impl_set_options", 00:04:23.856 "params": { 00:04:23.856 "impl_name": "posix", 00:04:23.856 "recv_buf_size": 2097152, 00:04:23.856 "send_buf_size": 2097152, 00:04:23.856 "enable_recv_pipe": true, 00:04:23.856 "enable_quickack": false, 00:04:23.856 "enable_placement_id": 0, 00:04:23.856 "enable_zerocopy_send_server": true, 00:04:23.856 "enable_zerocopy_send_client": false, 00:04:23.856 "zerocopy_threshold": 0, 00:04:23.856 "tls_version": 0, 00:04:23.856 "enable_ktls": false 00:04:23.856 } 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "vmd", 00:04:23.856 "config": [] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "accel", 00:04:23.856 "config": [ 00:04:23.856 { 00:04:23.856 "method": "accel_set_options", 00:04:23.856 "params": { 00:04:23.856 "small_cache_size": 128, 00:04:23.856 "large_cache_size": 16, 00:04:23.856 "task_count": 2048, 00:04:23.856 "sequence_count": 2048, 00:04:23.856 "buf_count": 2048 00:04:23.856 } 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "bdev", 00:04:23.856 "config": [ 00:04:23.856 { 00:04:23.856 "method": "bdev_set_options", 00:04:23.856 "params": { 00:04:23.856 "bdev_io_pool_size": 65535, 00:04:23.856 "bdev_io_cache_size": 256, 00:04:23.856 "bdev_auto_examine": true, 00:04:23.856 "iobuf_small_cache_size": 128, 00:04:23.856 "iobuf_large_cache_size": 16 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "bdev_raid_set_options", 00:04:23.856 "params": { 00:04:23.856 "process_window_size_kb": 1024, 00:04:23.856 "process_max_bandwidth_mb_sec": 0 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "bdev_iscsi_set_options", 00:04:23.856 "params": { 00:04:23.856 "timeout_sec": 30 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "bdev_nvme_set_options", 00:04:23.856 "params": { 00:04:23.856 "action_on_timeout": "none", 00:04:23.856 "timeout_us": 0, 00:04:23.856 "timeout_admin_us": 0, 00:04:23.856 "keep_alive_timeout_ms": 10000, 00:04:23.856 "arbitration_burst": 0, 00:04:23.856 "low_priority_weight": 0, 00:04:23.856 "medium_priority_weight": 0, 00:04:23.856 "high_priority_weight": 0, 00:04:23.856 "nvme_adminq_poll_period_us": 10000, 00:04:23.856 "nvme_ioq_poll_period_us": 0, 00:04:23.856 "io_queue_requests": 0, 00:04:23.856 "delay_cmd_submit": true, 00:04:23.856 "transport_retry_count": 4, 00:04:23.856 "bdev_retry_count": 3, 00:04:23.856 "transport_ack_timeout": 0, 00:04:23.856 "ctrlr_loss_timeout_sec": 0, 00:04:23.856 "reconnect_delay_sec": 0, 00:04:23.856 "fast_io_fail_timeout_sec": 0, 00:04:23.856 "disable_auto_failback": false, 00:04:23.856 "generate_uuids": false, 00:04:23.856 "transport_tos": 0, 00:04:23.856 "nvme_error_stat": false, 00:04:23.856 "rdma_srq_size": 0, 00:04:23.856 "io_path_stat": false, 
00:04:23.856 "allow_accel_sequence": false, 00:04:23.856 "rdma_max_cq_size": 0, 00:04:23.856 "rdma_cm_event_timeout_ms": 0, 00:04:23.856 "dhchap_digests": [ 00:04:23.856 "sha256", 00:04:23.856 "sha384", 00:04:23.856 "sha512" 00:04:23.856 ], 00:04:23.856 "dhchap_dhgroups": [ 00:04:23.856 "null", 00:04:23.856 "ffdhe2048", 00:04:23.856 "ffdhe3072", 00:04:23.856 "ffdhe4096", 00:04:23.856 "ffdhe6144", 00:04:23.856 "ffdhe8192" 00:04:23.856 ] 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "bdev_nvme_set_hotplug", 00:04:23.856 "params": { 00:04:23.856 "period_us": 100000, 00:04:23.856 "enable": false 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "bdev_wait_for_examine" 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "scsi", 00:04:23.856 "config": null 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "scheduler", 00:04:23.856 "config": [ 00:04:23.856 { 00:04:23.856 "method": "framework_set_scheduler", 00:04:23.856 "params": { 00:04:23.856 "name": "static" 00:04:23.856 } 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "vhost_scsi", 00:04:23.856 "config": [] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "vhost_blk", 00:04:23.856 "config": [] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "ublk", 00:04:23.856 "config": [] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "nbd", 00:04:23.856 "config": [] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "nvmf", 00:04:23.856 "config": [ 00:04:23.856 { 00:04:23.856 "method": "nvmf_set_config", 00:04:23.856 "params": { 00:04:23.856 "discovery_filter": "match_any", 00:04:23.856 "admin_cmd_passthru": { 00:04:23.856 "identify_ctrlr": false 00:04:23.856 }, 00:04:23.856 "dhchap_digests": [ 00:04:23.856 "sha256", 00:04:23.856 "sha384", 00:04:23.856 "sha512" 00:04:23.856 ], 00:04:23.856 "dhchap_dhgroups": [ 00:04:23.856 "null", 00:04:23.856 "ffdhe2048", 00:04:23.856 "ffdhe3072", 00:04:23.856 "ffdhe4096", 00:04:23.856 "ffdhe6144", 00:04:23.856 "ffdhe8192" 00:04:23.856 ] 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "nvmf_set_max_subsystems", 00:04:23.856 "params": { 00:04:23.856 "max_subsystems": 1024 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "nvmf_set_crdt", 00:04:23.856 "params": { 00:04:23.856 "crdt1": 0, 00:04:23.856 "crdt2": 0, 00:04:23.856 "crdt3": 0 00:04:23.856 } 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "method": "nvmf_create_transport", 00:04:23.856 "params": { 00:04:23.856 "trtype": "TCP", 00:04:23.856 "max_queue_depth": 128, 00:04:23.856 "max_io_qpairs_per_ctrlr": 127, 00:04:23.856 "in_capsule_data_size": 4096, 00:04:23.856 "max_io_size": 131072, 00:04:23.856 "io_unit_size": 131072, 00:04:23.856 "max_aq_depth": 128, 00:04:23.856 "num_shared_buffers": 511, 00:04:23.856 "buf_cache_size": 4294967295, 00:04:23.856 "dif_insert_or_strip": false, 00:04:23.856 "zcopy": false, 00:04:23.856 "c2h_success": true, 00:04:23.856 "sock_priority": 0, 00:04:23.856 "abort_timeout_sec": 1, 00:04:23.856 "ack_timeout": 0, 00:04:23.856 "data_wr_pool_size": 0 00:04:23.856 } 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 }, 00:04:23.856 { 00:04:23.856 "subsystem": "iscsi", 00:04:23.856 "config": [ 00:04:23.856 { 00:04:23.856 "method": "iscsi_set_options", 00:04:23.856 "params": { 00:04:23.856 "node_base": "iqn.2016-06.io.spdk", 00:04:23.856 "max_sessions": 128, 00:04:23.856 "max_connections_per_session": 2, 00:04:23.856 "max_queue_depth": 64, 00:04:23.856 
"default_time2wait": 2, 00:04:23.856 "default_time2retain": 20, 00:04:23.856 "first_burst_length": 8192, 00:04:23.856 "immediate_data": true, 00:04:23.856 "allow_duplicated_isid": false, 00:04:23.856 "error_recovery_level": 0, 00:04:23.856 "nop_timeout": 60, 00:04:23.856 "nop_in_interval": 30, 00:04:23.856 "disable_chap": false, 00:04:23.856 "require_chap": false, 00:04:23.856 "mutual_chap": false, 00:04:23.856 "chap_group": 0, 00:04:23.856 "max_large_datain_per_connection": 64, 00:04:23.856 "max_r2t_per_connection": 4, 00:04:23.856 "pdu_pool_size": 36864, 00:04:23.856 "immediate_data_pool_size": 16384, 00:04:23.856 "data_out_pool_size": 2048 00:04:23.856 } 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 } 00:04:23.856 ] 00:04:23.856 } 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57468 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57468 ']' 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57468 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57468 00:04:23.856 killing process with pid 57468 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57468' 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57468 00:04:23.856 04:24:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57468 00:04:25.228 04:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57508 00:04:25.228 04:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:25.228 04:24:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57508 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57508 ']' 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57508 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57508 00:04:30.489 killing process with pid 57508 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57508' 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57508 00:04:30.489 04:24:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57508 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.424 00:04:31.424 real 0m8.475s 00:04:31.424 user 0m8.072s 00:04:31.424 sys 0m0.622s 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.424 ************************************ 00:04:31.424 END TEST skip_rpc_with_json 00:04:31.424 ************************************ 00:04:31.424 04:24:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.424 ************************************ 00:04:31.424 START TEST skip_rpc_with_delay 00:04:31.424 ************************************ 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.424 [2024-11-03 04:24:54.392460] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
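That *ERROR* line is the whole point of skip_rpc_with_delay: '--wait-for-rpc' tells the app to pause startup until an RPC arrives, which is meaningless when '--no-rpc-server' is also given, so spdk_tgt has to refuse to start rather than hang. A minimal reproduction of the check, assuming the same build path as this workspace (this negation is roughly what the NOT wrapper in the test asserts):

  # expected to fail fast with: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: spdk_tgt started without an RPC server" >&2
      exit 1
  fi
  echo "got the expected startup failure"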
00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.424 ************************************ 00:04:31.424 END TEST skip_rpc_with_delay 00:04:31.424 ************************************ 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.424 00:04:31.424 real 0m0.106s 00:04:31.424 user 0m0.062s 00:04:31.424 sys 0m0.043s 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.424 04:24:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:31.424 04:24:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:31.424 04:24:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:31.424 04:24:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.424 04:24:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.424 ************************************ 00:04:31.424 START TEST exit_on_failed_rpc_init 00:04:31.424 ************************************ 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57625 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57625 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57625 ']' 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.424 04:24:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.683 [2024-11-03 04:24:54.569437] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:31.683 [2024-11-03 04:24:54.569571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:04:31.683 [2024-11-03 04:24:54.728880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.941 [2024-11-03 04:24:54.812311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:32.508 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.508 [2024-11-03 04:24:55.468163] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:04:32.508 [2024-11-03 04:24:55.468281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57643 ] 00:04:32.767 [2024-11-03 04:24:55.631730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.767 [2024-11-03 04:24:55.748623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.767 [2024-11-03 04:24:55.748719] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
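The two *ERROR* lines above are exactly what exit_on_failed_rpc_init looks for: a second spdk_tgt pointed at the same default RPC socket cannot listen on /var/tmp/spdk.sock, so RPC initialization fails and the second process must stop instead of coming up. A rough by-hand equivalent, assuming the same binary path; the real test waits with the waitforlisten helper where a plain sleep stands in here:

  # first instance owns the default RPC socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  sleep 2    # crude stand-in for the waitforlisten helper
  # second instance targets the same socket, must fail RPC init and exit non-zero
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started" >&2
  fi
  kill "$first_pid"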
00:04:32.767 [2024-11-03 04:24:55.748733] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:32.767 [2024-11-03 04:24:55.748748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57625 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57625 ']' 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57625 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57625 00:04:33.026 killing process with pid 57625 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57625' 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57625 00:04:33.026 04:24:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57625 00:04:34.403 00:04:34.403 real 0m2.630s 00:04:34.403 user 0m2.977s 00:04:34.403 sys 0m0.392s 00:04:34.403 04:24:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.403 04:24:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.403 ************************************ 00:04:34.403 END TEST exit_on_failed_rpc_init 00:04:34.403 ************************************ 00:04:34.403 04:24:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.403 ************************************ 00:04:34.403 END TEST skip_rpc 00:04:34.403 ************************************ 00:04:34.403 00:04:34.403 real 0m17.773s 00:04:34.403 user 0m16.942s 00:04:34.403 sys 0m1.638s 00:04:34.403 04:24:57 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.403 04:24:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.403 04:24:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.403 04:24:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.403 04:24:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.403 04:24:57 -- common/autotest_common.sh@10 -- # set +x 00:04:34.403 
************************************ 00:04:34.403 START TEST rpc_client 00:04:34.403 ************************************ 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.403 * Looking for test storage... 00:04:34.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.403 04:24:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.403 --rc genhtml_branch_coverage=1 00:04:34.403 --rc genhtml_function_coverage=1 00:04:34.403 --rc genhtml_legend=1 00:04:34.403 --rc geninfo_all_blocks=1 00:04:34.403 --rc geninfo_unexecuted_blocks=1 00:04:34.403 00:04:34.403 ' 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.403 --rc genhtml_branch_coverage=1 00:04:34.403 --rc genhtml_function_coverage=1 00:04:34.403 --rc genhtml_legend=1 00:04:34.403 --rc geninfo_all_blocks=1 00:04:34.403 --rc geninfo_unexecuted_blocks=1 00:04:34.403 00:04:34.403 ' 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.403 --rc genhtml_branch_coverage=1 00:04:34.403 --rc genhtml_function_coverage=1 00:04:34.403 --rc genhtml_legend=1 00:04:34.403 --rc geninfo_all_blocks=1 00:04:34.403 --rc geninfo_unexecuted_blocks=1 00:04:34.403 00:04:34.403 ' 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.403 --rc genhtml_branch_coverage=1 00:04:34.403 --rc genhtml_function_coverage=1 00:04:34.403 --rc genhtml_legend=1 00:04:34.403 --rc geninfo_all_blocks=1 00:04:34.403 --rc geninfo_unexecuted_blocks=1 00:04:34.403 00:04:34.403 ' 00:04:34.403 04:24:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:34.403 OK 00:04:34.403 04:24:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:34.403 00:04:34.403 real 0m0.195s 00:04:34.403 user 0m0.113s 00:04:34.403 sys 0m0.088s 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.403 04:24:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:34.403 ************************************ 00:04:34.403 END TEST rpc_client 00:04:34.403 ************************************ 00:04:34.403 04:24:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.403 04:24:57 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.403 04:24:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.403 04:24:57 -- common/autotest_common.sh@10 -- # set +x 00:04:34.403 ************************************ 00:04:34.403 START TEST json_config 00:04:34.403 ************************************ 00:04:34.403 04:24:57 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.666 04:24:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.666 04:24:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.666 04:24:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.666 04:24:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.666 04:24:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.666 04:24:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:34.666 04:24:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.666 04:24:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.666 04:24:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.666 04:24:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.666 04:24:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.666 04:24:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.666 04:24:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.666 04:24:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.666 04:24:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.666 --rc genhtml_branch_coverage=1 00:04:34.666 --rc genhtml_function_coverage=1 00:04:34.666 --rc genhtml_legend=1 00:04:34.666 --rc geninfo_all_blocks=1 00:04:34.666 --rc geninfo_unexecuted_blocks=1 00:04:34.666 00:04:34.666 ' 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.666 --rc genhtml_branch_coverage=1 00:04:34.666 --rc genhtml_function_coverage=1 00:04:34.666 --rc genhtml_legend=1 00:04:34.666 --rc geninfo_all_blocks=1 00:04:34.666 --rc geninfo_unexecuted_blocks=1 00:04:34.666 00:04:34.666 ' 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.666 --rc genhtml_branch_coverage=1 00:04:34.666 --rc genhtml_function_coverage=1 00:04:34.666 --rc genhtml_legend=1 00:04:34.666 --rc geninfo_all_blocks=1 00:04:34.666 --rc geninfo_unexecuted_blocks=1 00:04:34.666 00:04:34.666 ' 00:04:34.666 04:24:57 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.666 --rc genhtml_branch_coverage=1 00:04:34.666 --rc genhtml_function_coverage=1 00:04:34.666 --rc genhtml_legend=1 00:04:34.666 --rc geninfo_all_blocks=1 00:04:34.666 --rc geninfo_unexecuted_blocks=1 00:04:34.666 00:04:34.666 ' 00:04:34.666 04:24:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.666 04:24:57 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b476d71c-f613-4ce5-85f8-d410ab298fed 00:04:34.666 04:24:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b476d71c-f613-4ce5-85f8-d410ab298fed 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.667 04:24:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.667 04:24:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.667 04:24:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.667 04:24:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.667 04:24:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.667 04:24:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.667 04:24:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.667 04:24:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:34.667 04:24:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.667 04:24:57 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.667 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.667 04:24:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:34.667 WARNING: No tests are enabled so not running JSON configuration tests 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:34.667 04:24:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:34.667 00:04:34.667 real 0m0.146s 00:04:34.667 user 0m0.091s 00:04:34.667 sys 0m0.056s 00:04:34.667 04:24:57 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.667 04:24:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.667 ************************************ 00:04:34.667 END TEST json_config 00:04:34.667 ************************************ 00:04:34.667 04:24:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:34.667 04:24:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.667 04:24:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.667 04:24:57 -- common/autotest_common.sh@10 -- # set +x 00:04:34.667 ************************************ 00:04:34.667 START TEST json_config_extra_key 00:04:34.667 ************************************ 00:04:34.667 04:24:57 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:34.667 04:24:57 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.667 04:24:57 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.667 04:24:57 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.927 04:24:57 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.927 --rc genhtml_branch_coverage=1 00:04:34.927 --rc genhtml_function_coverage=1 00:04:34.927 --rc genhtml_legend=1 00:04:34.927 --rc geninfo_all_blocks=1 00:04:34.927 --rc geninfo_unexecuted_blocks=1 00:04:34.927 00:04:34.927 ' 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.927 --rc genhtml_branch_coverage=1 00:04:34.927 --rc genhtml_function_coverage=1 00:04:34.927 --rc genhtml_legend=1 00:04:34.927 --rc geninfo_all_blocks=1 00:04:34.927 --rc geninfo_unexecuted_blocks=1 00:04:34.927 00:04:34.927 ' 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.927 --rc genhtml_branch_coverage=1 00:04:34.927 --rc genhtml_function_coverage=1 00:04:34.927 --rc genhtml_legend=1 00:04:34.927 --rc geninfo_all_blocks=1 00:04:34.927 --rc geninfo_unexecuted_blocks=1 00:04:34.927 00:04:34.927 ' 00:04:34.927 04:24:57 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.927 --rc genhtml_branch_coverage=1 00:04:34.927 --rc 
genhtml_function_coverage=1 00:04:34.927 --rc genhtml_legend=1 00:04:34.927 --rc geninfo_all_blocks=1 00:04:34.927 --rc geninfo_unexecuted_blocks=1 00:04:34.927 00:04:34.927 ' 00:04:34.927 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b476d71c-f613-4ce5-85f8-d410ab298fed 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b476d71c-f613-4ce5-85f8-d410ab298fed 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.927 04:24:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.927 04:24:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.928 04:24:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.928 04:24:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.928 04:24:57 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.928 04:24:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:34.928 04:24:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.928 04:24:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.928 INFO: launching applications... 00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
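Note on the repeated "[: : integer expression expected" message above: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this environment, so the [ builtin prints the complaint, returns non-zero, and the script simply falls through to the next branch. A minimal sketch of the failure mode and a defensive guard; the variable name is illustrative, not the one nvmf/common.sh actually tests:

    # reproduces the warning: an empty string is not a valid integer operand
    flag=""
    [ "$flag" -eq 1 ] && echo "feature enabled"    # -> [: : integer expression expected

    # defaulting the value before the numeric test avoids the message
    [ "${flag:-0}" -eq 1 ] && echo "feature enabled"  # test just fails quietly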
00:04:34.928 04:24:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57836 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.928 Waiting for target to run... 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57836 /var/tmp/spdk_tgt.sock 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57836 ']' 00:04:34.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.928 04:24:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.928 04:24:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.928 [2024-11-03 04:24:57.857834] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:04:34.928 [2024-11-03 04:24:57.858057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57836 ] 00:04:35.186 [2024-11-03 04:24:58.165552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.186 [2024-11-03 04:24:58.238299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.753 00:04:35.753 INFO: shutting down applications... 00:04:35.753 04:24:58 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.753 04:24:58 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:35.753 04:24:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
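The json_config_extra_key run traced here follows the usual start/poll/stop pattern: spdk_tgt is launched with a private RPC socket and the extra_key.json config, the test waits for the socket to come up, and teardown (shown immediately below) sends SIGINT and polls the pid for up to 30 half-second intervals. A condensed, hedged sketch of that pattern; the binary path, socket and options are taken from the trace, while the socket-wait is a simplification of the real waitforlisten helper:

    # launch the target with a JSON config and a dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json extra_key.json &
    pid=$!

    # wait for the RPC socket to appear (waitforlisten does a more thorough check)
    until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done

    # graceful shutdown: SIGINT, then poll the pid, 30 x 0.5s as in the trace below
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done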
00:04:35.753 04:24:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57836 ]] 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57836 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57836 00:04:35.753 04:24:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.319 04:24:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.319 04:24:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.319 04:24:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57836 00:04:36.319 04:24:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.886 04:24:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.886 04:24:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.886 04:24:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57836 00:04:36.886 04:24:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.145 04:25:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.145 04:25:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.145 04:25:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57836 00:04:37.145 04:25:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.146 04:25:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.146 SPDK target shutdown done 00:04:37.146 04:25:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.146 04:25:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.146 Success 00:04:37.146 04:25:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.146 00:04:37.146 real 0m2.560s 00:04:37.146 user 0m2.273s 00:04:37.146 sys 0m0.368s 00:04:37.146 ************************************ 00:04:37.146 END TEST json_config_extra_key 00:04:37.146 ************************************ 00:04:37.146 04:25:00 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.146 04:25:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.406 04:25:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.406 04:25:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.406 04:25:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.406 04:25:00 -- common/autotest_common.sh@10 -- # set +x 00:04:37.406 ************************************ 00:04:37.406 START TEST alias_rpc 00:04:37.406 ************************************ 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.406 * Looking for test storage... 
00:04:37.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.406 04:25:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.406 --rc genhtml_branch_coverage=1 00:04:37.406 --rc genhtml_function_coverage=1 00:04:37.406 --rc genhtml_legend=1 00:04:37.406 --rc geninfo_all_blocks=1 00:04:37.406 --rc geninfo_unexecuted_blocks=1 00:04:37.406 00:04:37.406 ' 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.406 --rc genhtml_branch_coverage=1 00:04:37.406 --rc genhtml_function_coverage=1 00:04:37.406 --rc genhtml_legend=1 00:04:37.406 --rc geninfo_all_blocks=1 00:04:37.406 --rc geninfo_unexecuted_blocks=1 00:04:37.406 00:04:37.406 ' 00:04:37.406 04:25:00 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.406 --rc genhtml_branch_coverage=1 00:04:37.406 --rc genhtml_function_coverage=1 00:04:37.406 --rc genhtml_legend=1 00:04:37.407 --rc geninfo_all_blocks=1 00:04:37.407 --rc geninfo_unexecuted_blocks=1 00:04:37.407 00:04:37.407 ' 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.407 --rc genhtml_branch_coverage=1 00:04:37.407 --rc genhtml_function_coverage=1 00:04:37.407 --rc genhtml_legend=1 00:04:37.407 --rc geninfo_all_blocks=1 00:04:37.407 --rc geninfo_unexecuted_blocks=1 00:04:37.407 00:04:37.407 ' 00:04:37.407 04:25:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.407 04:25:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57923 00:04:37.407 04:25:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57923 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57923 ']' 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:37.407 04:25:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.407 04:25:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.666 [2024-11-03 04:25:00.501367] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:37.666 [2024-11-03 04:25:00.501512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57923 ] 00:04:37.666 [2024-11-03 04:25:00.662311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.924 [2024-11-03 04:25:00.750809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.490 04:25:01 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.490 04:25:01 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.490 04:25:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:38.748 04:25:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57923 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57923 ']' 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57923 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57923 00:04:38.748 killing process with pid 57923 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57923' 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@971 -- # kill 57923 00:04:38.748 04:25:01 alias_rpc -- common/autotest_common.sh@976 -- # wait 57923 00:04:39.685 ************************************ 00:04:39.685 END TEST alias_rpc 00:04:39.685 ************************************ 00:04:39.685 00:04:39.685 real 0m2.490s 00:04:39.685 user 0m2.602s 00:04:39.685 sys 0m0.415s 00:04:39.685 04:25:02 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.685 04:25:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.944 04:25:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.944 04:25:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.944 04:25:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.944 04:25:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.944 04:25:02 -- common/autotest_common.sh@10 -- # set +x 00:04:39.944 ************************************ 00:04:39.944 START TEST spdkcli_tcp 00:04:39.944 ************************************ 00:04:39.944 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.944 * Looking for test storage... 
00:04:39.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:39.944 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.944 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.944 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.944 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.944 04:25:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.944 04:25:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.944 04:25:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.944 04:25:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.944 04:25:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.945 04:25:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.945 --rc genhtml_branch_coverage=1 00:04:39.945 --rc genhtml_function_coverage=1 00:04:39.945 --rc genhtml_legend=1 00:04:39.945 --rc geninfo_all_blocks=1 00:04:39.945 --rc geninfo_unexecuted_blocks=1 00:04:39.945 00:04:39.945 ' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.945 --rc genhtml_branch_coverage=1 00:04:39.945 --rc genhtml_function_coverage=1 00:04:39.945 --rc genhtml_legend=1 00:04:39.945 --rc geninfo_all_blocks=1 00:04:39.945 --rc geninfo_unexecuted_blocks=1 00:04:39.945 
00:04:39.945 ' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.945 --rc genhtml_branch_coverage=1 00:04:39.945 --rc genhtml_function_coverage=1 00:04:39.945 --rc genhtml_legend=1 00:04:39.945 --rc geninfo_all_blocks=1 00:04:39.945 --rc geninfo_unexecuted_blocks=1 00:04:39.945 00:04:39.945 ' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.945 --rc genhtml_branch_coverage=1 00:04:39.945 --rc genhtml_function_coverage=1 00:04:39.945 --rc genhtml_legend=1 00:04:39.945 --rc geninfo_all_blocks=1 00:04:39.945 --rc geninfo_unexecuted_blocks=1 00:04:39.945 00:04:39.945 ' 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58013 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58013 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58013 ']' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.945 04:25:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.945 04:25:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.203 [2024-11-03 04:25:03.040062] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
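spdkcli_tcp exercises the RPC layer over TCP rather than over the UNIX socket: the target starts on /var/tmp/spdk.sock, socat bridges that socket to 127.0.0.1:9998, and rpc.py talks to the TCP endpoint (the bridge and the rpc_get_methods call appear in the trace below). A hedged sketch of the bridge using the addresses and flags visible in the trace; in the real test waitforlisten runs before the bridge is created:

    # start the target on its default UNIX-domain RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &

    # expose the UNIX socket on TCP 127.0.0.1:9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # issue an RPC over TCP with the retry/timeout flags seen in the trace
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods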
00:04:40.203 [2024-11-03 04:25:03.040161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58013 ] 00:04:40.203 [2024-11-03 04:25:03.190438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.203 [2024-11-03 04:25:03.276277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.203 [2024-11-03 04:25:03.276384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.137 04:25:03 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.137 04:25:03 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:41.137 04:25:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:41.137 04:25:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58029 00:04:41.137 04:25:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.137 [ 00:04:41.137 "bdev_malloc_delete", 00:04:41.137 "bdev_malloc_create", 00:04:41.137 "bdev_null_resize", 00:04:41.137 "bdev_null_delete", 00:04:41.137 "bdev_null_create", 00:04:41.137 "bdev_nvme_cuse_unregister", 00:04:41.137 "bdev_nvme_cuse_register", 00:04:41.137 "bdev_opal_new_user", 00:04:41.137 "bdev_opal_set_lock_state", 00:04:41.137 "bdev_opal_delete", 00:04:41.137 "bdev_opal_get_info", 00:04:41.137 "bdev_opal_create", 00:04:41.137 "bdev_nvme_opal_revert", 00:04:41.138 "bdev_nvme_opal_init", 00:04:41.138 "bdev_nvme_send_cmd", 00:04:41.138 "bdev_nvme_set_keys", 00:04:41.138 "bdev_nvme_get_path_iostat", 00:04:41.138 "bdev_nvme_get_mdns_discovery_info", 00:04:41.138 "bdev_nvme_stop_mdns_discovery", 00:04:41.138 "bdev_nvme_start_mdns_discovery", 00:04:41.138 "bdev_nvme_set_multipath_policy", 00:04:41.138 "bdev_nvme_set_preferred_path", 00:04:41.138 "bdev_nvme_get_io_paths", 00:04:41.138 "bdev_nvme_remove_error_injection", 00:04:41.138 "bdev_nvme_add_error_injection", 00:04:41.138 "bdev_nvme_get_discovery_info", 00:04:41.138 "bdev_nvme_stop_discovery", 00:04:41.138 "bdev_nvme_start_discovery", 00:04:41.138 "bdev_nvme_get_controller_health_info", 00:04:41.138 "bdev_nvme_disable_controller", 00:04:41.138 "bdev_nvme_enable_controller", 00:04:41.138 "bdev_nvme_reset_controller", 00:04:41.138 "bdev_nvme_get_transport_statistics", 00:04:41.138 "bdev_nvme_apply_firmware", 00:04:41.138 "bdev_nvme_detach_controller", 00:04:41.138 "bdev_nvme_get_controllers", 00:04:41.138 "bdev_nvme_attach_controller", 00:04:41.138 "bdev_nvme_set_hotplug", 00:04:41.138 "bdev_nvme_set_options", 00:04:41.138 "bdev_passthru_delete", 00:04:41.138 "bdev_passthru_create", 00:04:41.138 "bdev_lvol_set_parent_bdev", 00:04:41.138 "bdev_lvol_set_parent", 00:04:41.138 "bdev_lvol_check_shallow_copy", 00:04:41.138 "bdev_lvol_start_shallow_copy", 00:04:41.138 "bdev_lvol_grow_lvstore", 00:04:41.138 "bdev_lvol_get_lvols", 00:04:41.138 "bdev_lvol_get_lvstores", 00:04:41.138 "bdev_lvol_delete", 00:04:41.138 "bdev_lvol_set_read_only", 00:04:41.138 "bdev_lvol_resize", 00:04:41.138 "bdev_lvol_decouple_parent", 00:04:41.138 "bdev_lvol_inflate", 00:04:41.138 "bdev_lvol_rename", 00:04:41.138 "bdev_lvol_clone_bdev", 00:04:41.138 "bdev_lvol_clone", 00:04:41.138 "bdev_lvol_snapshot", 00:04:41.138 "bdev_lvol_create", 00:04:41.138 "bdev_lvol_delete_lvstore", 00:04:41.138 "bdev_lvol_rename_lvstore", 00:04:41.138 
"bdev_lvol_create_lvstore", 00:04:41.138 "bdev_raid_set_options", 00:04:41.138 "bdev_raid_remove_base_bdev", 00:04:41.138 "bdev_raid_add_base_bdev", 00:04:41.138 "bdev_raid_delete", 00:04:41.138 "bdev_raid_create", 00:04:41.138 "bdev_raid_get_bdevs", 00:04:41.138 "bdev_error_inject_error", 00:04:41.138 "bdev_error_delete", 00:04:41.138 "bdev_error_create", 00:04:41.138 "bdev_split_delete", 00:04:41.138 "bdev_split_create", 00:04:41.138 "bdev_delay_delete", 00:04:41.138 "bdev_delay_create", 00:04:41.138 "bdev_delay_update_latency", 00:04:41.138 "bdev_zone_block_delete", 00:04:41.138 "bdev_zone_block_create", 00:04:41.138 "blobfs_create", 00:04:41.138 "blobfs_detect", 00:04:41.138 "blobfs_set_cache_size", 00:04:41.138 "bdev_xnvme_delete", 00:04:41.138 "bdev_xnvme_create", 00:04:41.138 "bdev_aio_delete", 00:04:41.138 "bdev_aio_rescan", 00:04:41.138 "bdev_aio_create", 00:04:41.138 "bdev_ftl_set_property", 00:04:41.138 "bdev_ftl_get_properties", 00:04:41.138 "bdev_ftl_get_stats", 00:04:41.138 "bdev_ftl_unmap", 00:04:41.138 "bdev_ftl_unload", 00:04:41.138 "bdev_ftl_delete", 00:04:41.138 "bdev_ftl_load", 00:04:41.138 "bdev_ftl_create", 00:04:41.138 "bdev_virtio_attach_controller", 00:04:41.138 "bdev_virtio_scsi_get_devices", 00:04:41.138 "bdev_virtio_detach_controller", 00:04:41.138 "bdev_virtio_blk_set_hotplug", 00:04:41.138 "bdev_iscsi_delete", 00:04:41.138 "bdev_iscsi_create", 00:04:41.138 "bdev_iscsi_set_options", 00:04:41.138 "accel_error_inject_error", 00:04:41.138 "ioat_scan_accel_module", 00:04:41.138 "dsa_scan_accel_module", 00:04:41.138 "iaa_scan_accel_module", 00:04:41.138 "keyring_file_remove_key", 00:04:41.138 "keyring_file_add_key", 00:04:41.138 "keyring_linux_set_options", 00:04:41.138 "fsdev_aio_delete", 00:04:41.138 "fsdev_aio_create", 00:04:41.138 "iscsi_get_histogram", 00:04:41.138 "iscsi_enable_histogram", 00:04:41.138 "iscsi_set_options", 00:04:41.138 "iscsi_get_auth_groups", 00:04:41.138 "iscsi_auth_group_remove_secret", 00:04:41.138 "iscsi_auth_group_add_secret", 00:04:41.138 "iscsi_delete_auth_group", 00:04:41.138 "iscsi_create_auth_group", 00:04:41.138 "iscsi_set_discovery_auth", 00:04:41.138 "iscsi_get_options", 00:04:41.138 "iscsi_target_node_request_logout", 00:04:41.138 "iscsi_target_node_set_redirect", 00:04:41.138 "iscsi_target_node_set_auth", 00:04:41.138 "iscsi_target_node_add_lun", 00:04:41.138 "iscsi_get_stats", 00:04:41.138 "iscsi_get_connections", 00:04:41.138 "iscsi_portal_group_set_auth", 00:04:41.138 "iscsi_start_portal_group", 00:04:41.138 "iscsi_delete_portal_group", 00:04:41.138 "iscsi_create_portal_group", 00:04:41.138 "iscsi_get_portal_groups", 00:04:41.138 "iscsi_delete_target_node", 00:04:41.138 "iscsi_target_node_remove_pg_ig_maps", 00:04:41.138 "iscsi_target_node_add_pg_ig_maps", 00:04:41.138 "iscsi_create_target_node", 00:04:41.138 "iscsi_get_target_nodes", 00:04:41.138 "iscsi_delete_initiator_group", 00:04:41.138 "iscsi_initiator_group_remove_initiators", 00:04:41.138 "iscsi_initiator_group_add_initiators", 00:04:41.138 "iscsi_create_initiator_group", 00:04:41.138 "iscsi_get_initiator_groups", 00:04:41.138 "nvmf_set_crdt", 00:04:41.138 "nvmf_set_config", 00:04:41.138 "nvmf_set_max_subsystems", 00:04:41.138 "nvmf_stop_mdns_prr", 00:04:41.138 "nvmf_publish_mdns_prr", 00:04:41.138 "nvmf_subsystem_get_listeners", 00:04:41.138 "nvmf_subsystem_get_qpairs", 00:04:41.138 "nvmf_subsystem_get_controllers", 00:04:41.138 "nvmf_get_stats", 00:04:41.138 "nvmf_get_transports", 00:04:41.138 "nvmf_create_transport", 00:04:41.138 "nvmf_get_targets", 00:04:41.138 
"nvmf_delete_target", 00:04:41.138 "nvmf_create_target", 00:04:41.138 "nvmf_subsystem_allow_any_host", 00:04:41.138 "nvmf_subsystem_set_keys", 00:04:41.138 "nvmf_subsystem_remove_host", 00:04:41.138 "nvmf_subsystem_add_host", 00:04:41.138 "nvmf_ns_remove_host", 00:04:41.138 "nvmf_ns_add_host", 00:04:41.138 "nvmf_subsystem_remove_ns", 00:04:41.138 "nvmf_subsystem_set_ns_ana_group", 00:04:41.138 "nvmf_subsystem_add_ns", 00:04:41.138 "nvmf_subsystem_listener_set_ana_state", 00:04:41.138 "nvmf_discovery_get_referrals", 00:04:41.138 "nvmf_discovery_remove_referral", 00:04:41.138 "nvmf_discovery_add_referral", 00:04:41.138 "nvmf_subsystem_remove_listener", 00:04:41.138 "nvmf_subsystem_add_listener", 00:04:41.138 "nvmf_delete_subsystem", 00:04:41.138 "nvmf_create_subsystem", 00:04:41.138 "nvmf_get_subsystems", 00:04:41.138 "env_dpdk_get_mem_stats", 00:04:41.138 "nbd_get_disks", 00:04:41.138 "nbd_stop_disk", 00:04:41.138 "nbd_start_disk", 00:04:41.138 "ublk_recover_disk", 00:04:41.138 "ublk_get_disks", 00:04:41.138 "ublk_stop_disk", 00:04:41.138 "ublk_start_disk", 00:04:41.138 "ublk_destroy_target", 00:04:41.138 "ublk_create_target", 00:04:41.138 "virtio_blk_create_transport", 00:04:41.138 "virtio_blk_get_transports", 00:04:41.138 "vhost_controller_set_coalescing", 00:04:41.138 "vhost_get_controllers", 00:04:41.138 "vhost_delete_controller", 00:04:41.138 "vhost_create_blk_controller", 00:04:41.138 "vhost_scsi_controller_remove_target", 00:04:41.138 "vhost_scsi_controller_add_target", 00:04:41.138 "vhost_start_scsi_controller", 00:04:41.138 "vhost_create_scsi_controller", 00:04:41.138 "thread_set_cpumask", 00:04:41.138 "scheduler_set_options", 00:04:41.138 "framework_get_governor", 00:04:41.138 "framework_get_scheduler", 00:04:41.138 "framework_set_scheduler", 00:04:41.138 "framework_get_reactors", 00:04:41.138 "thread_get_io_channels", 00:04:41.138 "thread_get_pollers", 00:04:41.138 "thread_get_stats", 00:04:41.138 "framework_monitor_context_switch", 00:04:41.138 "spdk_kill_instance", 00:04:41.138 "log_enable_timestamps", 00:04:41.138 "log_get_flags", 00:04:41.138 "log_clear_flag", 00:04:41.138 "log_set_flag", 00:04:41.138 "log_get_level", 00:04:41.138 "log_set_level", 00:04:41.138 "log_get_print_level", 00:04:41.138 "log_set_print_level", 00:04:41.138 "framework_enable_cpumask_locks", 00:04:41.138 "framework_disable_cpumask_locks", 00:04:41.138 "framework_wait_init", 00:04:41.138 "framework_start_init", 00:04:41.138 "scsi_get_devices", 00:04:41.138 "bdev_get_histogram", 00:04:41.138 "bdev_enable_histogram", 00:04:41.138 "bdev_set_qos_limit", 00:04:41.138 "bdev_set_qd_sampling_period", 00:04:41.138 "bdev_get_bdevs", 00:04:41.138 "bdev_reset_iostat", 00:04:41.138 "bdev_get_iostat", 00:04:41.138 "bdev_examine", 00:04:41.138 "bdev_wait_for_examine", 00:04:41.138 "bdev_set_options", 00:04:41.138 "accel_get_stats", 00:04:41.138 "accel_set_options", 00:04:41.138 "accel_set_driver", 00:04:41.138 "accel_crypto_key_destroy", 00:04:41.138 "accel_crypto_keys_get", 00:04:41.138 "accel_crypto_key_create", 00:04:41.138 "accel_assign_opc", 00:04:41.138 "accel_get_module_info", 00:04:41.138 "accel_get_opc_assignments", 00:04:41.138 "vmd_rescan", 00:04:41.138 "vmd_remove_device", 00:04:41.138 "vmd_enable", 00:04:41.138 "sock_get_default_impl", 00:04:41.138 "sock_set_default_impl", 00:04:41.138 "sock_impl_set_options", 00:04:41.138 "sock_impl_get_options", 00:04:41.138 "iobuf_get_stats", 00:04:41.138 "iobuf_set_options", 00:04:41.138 "keyring_get_keys", 00:04:41.138 "framework_get_pci_devices", 00:04:41.138 
"framework_get_config", 00:04:41.138 "framework_get_subsystems", 00:04:41.138 "fsdev_set_opts", 00:04:41.138 "fsdev_get_opts", 00:04:41.138 "trace_get_info", 00:04:41.138 "trace_get_tpoint_group_mask", 00:04:41.139 "trace_disable_tpoint_group", 00:04:41.139 "trace_enable_tpoint_group", 00:04:41.139 "trace_clear_tpoint_mask", 00:04:41.139 "trace_set_tpoint_mask", 00:04:41.139 "notify_get_notifications", 00:04:41.139 "notify_get_types", 00:04:41.139 "spdk_get_version", 00:04:41.139 "rpc_get_methods" 00:04:41.139 ] 00:04:41.139 04:25:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.139 04:25:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:41.139 04:25:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58013 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58013 ']' 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58013 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58013 00:04:41.139 killing process with pid 58013 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58013' 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58013 00:04:41.139 04:25:04 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58013 00:04:42.513 ************************************ 00:04:42.513 END TEST spdkcli_tcp 00:04:42.513 ************************************ 00:04:42.513 00:04:42.513 real 0m2.490s 00:04:42.513 user 0m4.452s 00:04:42.513 sys 0m0.425s 00:04:42.513 04:25:05 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.513 04:25:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.513 04:25:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.513 04:25:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.513 04:25:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.513 04:25:05 -- common/autotest_common.sh@10 -- # set +x 00:04:42.513 ************************************ 00:04:42.513 START TEST dpdk_mem_utility 00:04:42.513 ************************************ 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.513 * Looking for test storage... 
00:04:42.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.513 04:25:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.513 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.514 --rc genhtml_branch_coverage=1 00:04:42.514 --rc genhtml_function_coverage=1 00:04:42.514 --rc genhtml_legend=1 00:04:42.514 --rc geninfo_all_blocks=1 00:04:42.514 --rc geninfo_unexecuted_blocks=1 00:04:42.514 00:04:42.514 ' 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.514 --rc 
genhtml_branch_coverage=1 00:04:42.514 --rc genhtml_function_coverage=1 00:04:42.514 --rc genhtml_legend=1 00:04:42.514 --rc geninfo_all_blocks=1 00:04:42.514 --rc geninfo_unexecuted_blocks=1 00:04:42.514 00:04:42.514 ' 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.514 --rc genhtml_branch_coverage=1 00:04:42.514 --rc genhtml_function_coverage=1 00:04:42.514 --rc genhtml_legend=1 00:04:42.514 --rc geninfo_all_blocks=1 00:04:42.514 --rc geninfo_unexecuted_blocks=1 00:04:42.514 00:04:42.514 ' 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.514 --rc genhtml_branch_coverage=1 00:04:42.514 --rc genhtml_function_coverage=1 00:04:42.514 --rc genhtml_legend=1 00:04:42.514 --rc geninfo_all_blocks=1 00:04:42.514 --rc geninfo_unexecuted_blocks=1 00:04:42.514 00:04:42.514 ' 00:04:42.514 04:25:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.514 04:25:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58119 00:04:42.514 04:25:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58119 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58119 ']' 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.514 04:25:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.514 04:25:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.514 [2024-11-03 04:25:05.583507] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
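The dpdk_mem_utility test asks the running target to dump its DPDK memory state and then post-processes the dump: the env_dpdk_get_mem_stats RPC (listed in rpc_get_methods above) reports that it wrote /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary and, with -m 0, the per-element listing that follows. A hedged sketch of the sequence the trace below performs; it assumes dpdk_mem_info.py picks the dump up from its default location:

    # ask the running spdk_tgt to dump DPDK memory statistics
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # detailed element listing for heap 0
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0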
00:04:42.514 [2024-11-03 04:25:05.583932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:04:42.780 [2024-11-03 04:25:05.730198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.780 [2024-11-03 04:25:05.810490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.346 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.346 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:43.346 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.346 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.346 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.346 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.346 { 00:04:43.346 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.346 } 00:04:43.346 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.346 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.346 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:43.346 1 heaps totaling size 816.000000 MiB 00:04:43.346 size: 816.000000 MiB heap id: 0 00:04:43.346 end heaps---------- 00:04:43.346 9 mempools totaling size 595.772034 MiB 00:04:43.346 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.346 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.346 size: 92.545471 MiB name: bdev_io_58119 00:04:43.346 size: 50.003479 MiB name: msgpool_58119 00:04:43.346 size: 36.509338 MiB name: fsdev_io_58119 00:04:43.346 size: 21.763794 MiB name: PDU_Pool 00:04:43.346 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.346 size: 4.133484 MiB name: evtpool_58119 00:04:43.346 size: 0.026123 MiB name: Session_Pool 00:04:43.346 end mempools------- 00:04:43.346 6 memzones totaling size 4.142822 MiB 00:04:43.346 size: 1.000366 MiB name: RG_ring_0_58119 00:04:43.346 size: 1.000366 MiB name: RG_ring_1_58119 00:04:43.346 size: 1.000366 MiB name: RG_ring_4_58119 00:04:43.346 size: 1.000366 MiB name: RG_ring_5_58119 00:04:43.346 size: 0.125366 MiB name: RG_ring_2_58119 00:04:43.346 size: 0.015991 MiB name: RG_ring_3_58119 00:04:43.346 end memzones------- 00:04:43.346 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.606 heap id: 0 total size: 816.000000 MiB number of busy elements: 326 number of free elements: 18 00:04:43.606 list of free elements. 
size: 16.788696 MiB 00:04:43.606 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:43.606 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:43.606 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:43.606 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:43.606 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:43.606 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:43.606 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:43.606 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:43.606 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:43.606 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:43.606 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:43.606 element at address: 0x20001ac00000 with size: 0.558777 MiB 00:04:43.606 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:43.606 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:43.606 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:43.606 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:43.606 element at address: 0x200028000000 with size: 0.390930 MiB 00:04:43.606 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:43.606 list of standard malloc elements. size: 199.290405 MiB 00:04:43.606 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:43.606 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:43.606 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:43.606 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:43.606 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:43.606 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:43.606 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:43.606 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:43.606 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:43.606 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:43.606 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:43.606 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:43.606 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:43.606 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:43.606 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:43.606 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:43.606 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:43.607 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac910c0 with size: 0.000244 MiB 
00:04:43.607 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:43.607 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200028064140 with size: 0.000244 MiB 00:04:43.607 element at address: 0x200028064240 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20002806af00 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:43.607 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806cd80 
with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:43.608 element at address: 0x20002806fe80 with size: 0.000244 MiB 
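The listing above is the standard-malloc-element half of the DPDK memory report: every tracked heap element is printed with its address and size, and the memzone section that starts next ties those elements back to named SPDK pools (msgpool, the PDU pools, bdev_io, and so on). As a convenience, here is a small hedged sketch in bash that totals a dump saved in this format; the dump file path is illustrative, and the assumption that the dump was captured one element per line (as DPDK emits it) is not something the packed log above shows directly.

    #!/usr/bin/env bash
    # Minimal sketch: summarize a DPDK heap dump in the "element at address ...
    # with size ... MiB" format shown above. The default file path is an assumption.
    dump_file=${1:-/tmp/spdk_mem_dump.txt}

    awk '
      /element at address:/ && /with size:/ {
          n   += 1
          sum += $(NF - 1)      # size in MiB is the next-to-last field
      }
      END { printf "malloc elements: %d, total size: %.3f MiB\n", n, sum }
    ' "$dump_file"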
00:04:43.608 list of memzone associated elements. size: 599.920898 MiB 00:04:43.608 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:43.608 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.608 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:43.608 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.608 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:43.608 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58119_0 00:04:43.608 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:43.608 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58119_0 00:04:43.608 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:43.608 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58119_0 00:04:43.608 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:43.608 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.608 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:43.608 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.608 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:43.608 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58119_0 00:04:43.608 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:43.608 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58119 00:04:43.608 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:43.608 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58119 00:04:43.608 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:43.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.608 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:43.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.608 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:43.608 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.608 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:43.608 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.608 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:43.608 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58119 00:04:43.608 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:43.608 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58119 00:04:43.608 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:43.608 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58119 00:04:43.608 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:43.608 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58119 00:04:43.608 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:43.608 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58119 00:04:43.608 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:43.608 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58119 00:04:43.608 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:43.608 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.608 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:43.608 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.608 element at address: 0x20001967c440 with size: 0.250549 MiB 
00:04:43.608 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.608 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:43.608 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58119 00:04:43.608 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:43.608 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58119 00:04:43.608 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:43.608 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.608 element at address: 0x200028064340 with size: 0.023804 MiB 00:04:43.608 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.608 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:43.608 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58119 00:04:43.608 element at address: 0x20002806a4c0 with size: 0.002502 MiB 00:04:43.608 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.608 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:43.608 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58119 00:04:43.608 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:43.608 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58119 00:04:43.608 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:43.608 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58119 00:04:43.608 element at address: 0x20002806b000 with size: 0.000366 MiB 00:04:43.608 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.608 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.608 04:25:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58119 00:04:43.608 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58119 ']' 00:04:43.608 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58119 00:04:43.608 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:43.608 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58119 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58119' 00:04:43.609 killing process with pid 58119 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58119 00:04:43.609 04:25:06 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58119 00:04:44.985 00:04:44.985 real 0m2.292s 00:04:44.985 user 0m2.266s 00:04:44.985 sys 0m0.374s 00:04:44.985 04:25:07 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.985 04:25:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.985 ************************************ 00:04:44.985 END TEST dpdk_mem_utility 00:04:44.985 ************************************ 00:04:44.985 04:25:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.985 04:25:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.985 04:25:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.985 
04:25:07 -- common/autotest_common.sh@10 -- # set +x 00:04:44.985 ************************************ 00:04:44.985 START TEST event 00:04:44.985 ************************************ 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.985 * Looking for test storage... 00:04:44.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.985 04:25:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.985 04:25:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.985 04:25:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.985 04:25:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.985 04:25:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.985 04:25:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.985 04:25:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.985 04:25:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.985 04:25:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.985 04:25:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.985 04:25:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.985 04:25:07 event -- scripts/common.sh@344 -- # case "$op" in 00:04:44.985 04:25:07 event -- scripts/common.sh@345 -- # : 1 00:04:44.985 04:25:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.985 04:25:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.985 04:25:07 event -- scripts/common.sh@365 -- # decimal 1 00:04:44.985 04:25:07 event -- scripts/common.sh@353 -- # local d=1 00:04:44.985 04:25:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.985 04:25:07 event -- scripts/common.sh@355 -- # echo 1 00:04:44.985 04:25:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.985 04:25:07 event -- scripts/common.sh@366 -- # decimal 2 00:04:44.985 04:25:07 event -- scripts/common.sh@353 -- # local d=2 00:04:44.985 04:25:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.985 04:25:07 event -- scripts/common.sh@355 -- # echo 2 00:04:44.985 04:25:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.985 04:25:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.985 04:25:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.985 04:25:07 event -- scripts/common.sh@368 -- # return 0 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.985 --rc genhtml_branch_coverage=1 00:04:44.985 --rc genhtml_function_coverage=1 00:04:44.985 --rc genhtml_legend=1 00:04:44.985 --rc geninfo_all_blocks=1 00:04:44.985 --rc geninfo_unexecuted_blocks=1 00:04:44.985 00:04:44.985 ' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.985 --rc genhtml_branch_coverage=1 00:04:44.985 --rc genhtml_function_coverage=1 00:04:44.985 --rc genhtml_legend=1 00:04:44.985 --rc geninfo_all_blocks=1 00:04:44.985 --rc geninfo_unexecuted_blocks=1 00:04:44.985 00:04:44.985 ' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.985 --rc genhtml_branch_coverage=1 00:04:44.985 --rc genhtml_function_coverage=1 00:04:44.985 --rc genhtml_legend=1 00:04:44.985 --rc geninfo_all_blocks=1 00:04:44.985 --rc geninfo_unexecuted_blocks=1 00:04:44.985 00:04:44.985 ' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.985 --rc genhtml_branch_coverage=1 00:04:44.985 --rc genhtml_function_coverage=1 00:04:44.985 --rc genhtml_legend=1 00:04:44.985 --rc geninfo_all_blocks=1 00:04:44.985 --rc geninfo_unexecuted_blocks=1 00:04:44.985 00:04:44.985 ' 00:04:44.985 04:25:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:44.985 04:25:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.985 04:25:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:44.985 04:25:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.985 04:25:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.985 ************************************ 00:04:44.985 START TEST event_perf 00:04:44.985 ************************************ 00:04:44.985 04:25:07 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.985 Running I/O for 1 seconds...[2024-11-03 
04:25:07.917798] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:04:44.986 [2024-11-03 04:25:07.917901] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:04:45.247 [2024-11-03 04:25:08.078941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.247 [2024-11-03 04:25:08.179099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.247 [2024-11-03 04:25:08.179358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.247 [2024-11-03 04:25:08.179682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.247 [2024-11-03 04:25:08.179811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.632 Running I/O for 1 seconds... 00:04:46.632 lcore 0: 158759 00:04:46.632 lcore 1: 158755 00:04:46.632 lcore 2: 158758 00:04:46.632 lcore 3: 158760 00:04:46.632 done. 00:04:46.632 00:04:46.632 real 0m1.463s 00:04:46.632 user 0m4.261s 00:04:46.632 sys 0m0.079s 00:04:46.632 ************************************ 00:04:46.632 END TEST event_perf 00:04:46.632 ************************************ 00:04:46.632 04:25:09 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.632 04:25:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.632 04:25:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.632 04:25:09 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:46.632 04:25:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.632 04:25:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.632 ************************************ 00:04:46.632 START TEST event_reactor 00:04:46.632 ************************************ 00:04:46.632 04:25:09 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.632 [2024-11-03 04:25:09.436383] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
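The event_perf run that finished above schedules events across four reactors (core mask 0xF) for one second and prints a per-lcore completion count plus the usual timing summary. A hedged sketch of reproducing it by hand with the same arguments; the binary path is the one the harness used, and the need for root plus preconfigured hugepages is an assumption typical of SPDK apps rather than something this log states.

    # Re-run the event_perf microbenchmark with the options used above:
    # core mask 0xF (four reactors), one-second run.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
    # Output ends with one "lcore N: <count>" line per reactor, then "done."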
00:04:46.632 [2024-11-03 04:25:09.436617] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:04:46.632 [2024-11-03 04:25:09.597551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.632 [2024-11-03 04:25:09.693345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.017 test_start 00:04:48.018 oneshot 00:04:48.018 tick 100 00:04:48.018 tick 100 00:04:48.018 tick 250 00:04:48.018 tick 100 00:04:48.018 tick 100 00:04:48.018 tick 100 00:04:48.018 tick 250 00:04:48.018 tick 500 00:04:48.018 tick 100 00:04:48.018 tick 100 00:04:48.018 tick 250 00:04:48.018 tick 100 00:04:48.018 tick 100 00:04:48.018 test_end 00:04:48.018 00:04:48.018 real 0m1.436s 00:04:48.018 user 0m1.273s 00:04:48.018 sys 0m0.056s 00:04:48.018 04:25:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.018 ************************************ 00:04:48.018 END TEST event_reactor 00:04:48.018 ************************************ 00:04:48.018 04:25:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:48.018 04:25:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.018 04:25:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:48.018 04:25:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.018 04:25:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.018 ************************************ 00:04:48.018 START TEST event_reactor_perf 00:04:48.018 ************************************ 00:04:48.018 04:25:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.018 [2024-11-03 04:25:10.935352] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:48.018 [2024-11-03 04:25:10.935460] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:04:48.018 [2024-11-03 04:25:11.096079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.278 [2024-11-03 04:25:11.200937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.664 test_start 00:04:49.664 test_end 00:04:49.664 Performance: 314223 events per second 00:04:49.664 00:04:49.664 real 0m1.449s 00:04:49.664 user 0m1.266s 00:04:49.664 sys 0m0.075s 00:04:49.664 ************************************ 00:04:49.664 END TEST event_reactor_perf 00:04:49.664 ************************************ 00:04:49.664 04:25:12 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.664 04:25:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.664 04:25:12 event -- event/event.sh@49 -- # uname -s 00:04:49.664 04:25:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.664 04:25:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.664 04:25:12 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.664 04:25:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.664 04:25:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.664 ************************************ 00:04:49.664 START TEST event_scheduler 00:04:49.664 ************************************ 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.664 * Looking for test storage... 
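The reactor_perf pass above runs a single reactor (-c 0x1 in the EAL parameters) and reports an aggregate rate, here 314223 events per second. When scripting around it, that figure can be pulled straight from the report line; a minimal sketch, assuming the same binary path and one-second run as in the trace:

    # Run reactor_perf for one second and extract the events-per-second figure.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1 \
        | awk '/Performance:/ {print $2, "events/sec"}'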
00:04:49.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:49.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.664 04:25:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.664 --rc genhtml_branch_coverage=1 00:04:49.664 --rc genhtml_function_coverage=1 00:04:49.664 --rc genhtml_legend=1 00:04:49.664 --rc geninfo_all_blocks=1 00:04:49.664 --rc geninfo_unexecuted_blocks=1 00:04:49.664 00:04:49.664 ' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.664 --rc genhtml_branch_coverage=1 00:04:49.664 --rc genhtml_function_coverage=1 00:04:49.664 --rc genhtml_legend=1 00:04:49.664 --rc geninfo_all_blocks=1 00:04:49.664 --rc geninfo_unexecuted_blocks=1 00:04:49.664 00:04:49.664 ' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.664 --rc genhtml_branch_coverage=1 00:04:49.664 --rc genhtml_function_coverage=1 00:04:49.664 --rc genhtml_legend=1 00:04:49.664 --rc geninfo_all_blocks=1 00:04:49.664 --rc geninfo_unexecuted_blocks=1 00:04:49.664 00:04:49.664 ' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.664 --rc genhtml_branch_coverage=1 00:04:49.664 --rc genhtml_function_coverage=1 00:04:49.664 --rc genhtml_legend=1 00:04:49.664 --rc geninfo_all_blocks=1 00:04:49.664 --rc geninfo_unexecuted_blocks=1 00:04:49.664 00:04:49.664 ' 00:04:49.664 04:25:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.664 04:25:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58357 00:04:49.664 04:25:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.664 04:25:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58357 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58357 ']' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.664 04:25:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.664 04:25:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.664 [2024-11-03 04:25:12.621952] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:04:49.664 [2024-11-03 04:25:12.622248] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58357 ] 00:04:49.925 [2024-11-03 04:25:12.782213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.925 [2024-11-03 04:25:12.885881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.925 [2024-11-03 04:25:12.886199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.925 [2024-11-03 04:25:12.886489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.925 [2024-11-03 04:25:12.886634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:50.494 04:25:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.494 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.494 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.494 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.494 POWER: Cannot set governor of lcore 0 to performance 00:04:50.494 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.494 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.494 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.494 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.494 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:50.494 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:50.494 POWER: Unable to set Power Management Environment for lcore 0 00:04:50.494 [2024-11-03 04:25:13.465405] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:50.494 [2024-11-03 04:25:13.465476] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:50.494 [2024-11-03 04:25:13.465901] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:50.494 [2024-11-03 04:25:13.465935] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:50.494 [2024-11-03 04:25:13.465945] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:50.494 [2024-11-03 04:25:13.465953] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.494 04:25:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.494 04:25:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.754 [2024-11-03 04:25:13.686490] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
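What the scheduler test just did above: the app was launched with --wait-for-rpc, the dynamic scheduler was selected over RPC before initialization was allowed to proceed, and the POWER/cpufreq errors are the expected result of running in a VM with no scaling_governor access (the dynamic scheduler still comes up without the dpdk governor, with load limit 20, core limit 80, core busy 95). A minimal sketch of the same switch issued through rpc.py directly; rpc_cmd in the trace ultimately sends the same JSON-RPC methods, and the socket path is the one the test waits on.

    # Switch a --wait-for-rpc SPDK app to the dynamic scheduler, then let it
    # finish initializing, as scheduler.sh@39-40 does above.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC framework_set_scheduler dynamic
    $RPC framework_start_init
    $RPC framework_get_scheduler     # should now report the dynamic scheduler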
00:04:50.754 04:25:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.754 04:25:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:50.754 04:25:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.755 04:25:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 ************************************ 00:04:50.755 START TEST scheduler_create_thread 00:04:50.755 ************************************ 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 2 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 3 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 4 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 5 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 6 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 7 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 8 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 9 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 10 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.755 04:25:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.660 04:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.660 04:25:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.660 04:25:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.660 04:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.660 04:25:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.594 ************************************ 00:04:53.594 END TEST scheduler_create_thread 00:04:53.594 ************************************ 00:04:53.594 04:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.594 00:04:53.594 real 0m2.612s 00:04:53.594 user 0m0.014s 00:04:53.594 sys 0m0.006s 00:04:53.594 04:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.594 04:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.594 04:25:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.594 04:25:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58357 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58357 ']' 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58357 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58357 00:04:53.594 killing process with pid 58357 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58357' 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58357 00:04:53.594 04:25:16 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58357 00:04:53.852 [2024-11-03 04:25:16.796880] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
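The scheduler_create_thread subtest that just ended drives the test app's scheduler_plugin RPCs: it creates pinned threads on each core with different active percentages, captures the returned thread ids, changes one thread's active level, deletes another, and then kills the app. A hedged sketch of the same create/set_active/delete cycle through rpc.py's --plugin loader; the plugin module name and RPC method names are taken from the trace, while the PYTHONPATH handling and the plugin's location under test/event/scheduler are assumptions.

    # Create, throttle and delete a scheduler test thread via the plugin RPCs
    # seen in the trace above. scheduler_thread_create prints the new thread id.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"   # assumption

    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"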
00:04:54.420 00:04:54.420 real 0m4.962s 00:04:54.420 user 0m8.745s 00:04:54.420 sys 0m0.309s 00:04:54.420 04:25:17 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.420 ************************************ 00:04:54.420 END TEST event_scheduler 00:04:54.420 ************************************ 00:04:54.420 04:25:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.420 04:25:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.420 04:25:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.420 04:25:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.420 04:25:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.420 04:25:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.420 ************************************ 00:04:54.420 START TEST app_repeat 00:04:54.420 ************************************ 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:54.420 Process app_repeat pid: 58457 00:04:54.420 spdk_app_start Round 0 00:04:54.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58457 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58457' 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.420 04:25:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58457 /var/tmp/spdk-nbd.sock 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58457 ']' 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.420 04:25:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.420 [2024-11-03 04:25:17.480944] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
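app_repeat has now started on /var/tmp/spdk-nbd.sock with a two-core mask (0x3) and a repeat count of four. As the trace below shows, the event.sh harness then creates two 64 MiB malloc bdevs with 4096-byte blocks and exports them through the kernel NBD driver as /dev/nbd0 and /dev/nbd1 before verifying data through them. A minimal sketch of that setup using the same rpc.py calls; it assumes the app is already listening on the socket and that the nbd module is loaded (the harness checked this with modprobe earlier).

    # Malloc bdev + NBD export sequence driven by the app_repeat test (see the
    # rpc.py calls in the trace that follows).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create 64 4096        # 64 MiB, 4096-byte blocks -> "Malloc0" above
    $RPC bdev_malloc_create 64 4096        # second bdev              -> "Malloc1"
    $RPC nbd_start_disk Malloc0 /dev/nbd0  # export over /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    grep -qw nbd0 /proc/partitions && echo "/dev/nbd0 is up"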
00:04:54.420 [2024-11-03 04:25:17.481172] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58457 ] 00:04:54.680 [2024-11-03 04:25:17.637388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.680 [2024-11-03 04:25:17.718369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.680 [2024-11-03 04:25:17.718455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.270 04:25:18 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.270 04:25:18 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:55.270 04:25:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.527 Malloc0 00:04:55.527 04:25:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.789 Malloc1 00:04:56.047 04:25:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.047 04:25:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.047 /dev/nbd0 00:04:56.047 04:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.047 04:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.047 04:25:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:56.047 04:25:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:56.047 04:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:56.048 04:25:19 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.048 1+0 records in 00:04:56.048 1+0 records out 00:04:56.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241102 s, 17.0 MB/s 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:56.048 04:25:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:56.048 04:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.048 04:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.048 04:25:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.306 /dev/nbd1 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.306 1+0 records in 00:04:56.306 1+0 records out 00:04:56.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024717 s, 16.6 MB/s 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:56.306 04:25:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.306 04:25:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.306 
04:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.564 04:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.564 { 00:04:56.564 "nbd_device": "/dev/nbd0", 00:04:56.564 "bdev_name": "Malloc0" 00:04:56.564 }, 00:04:56.564 { 00:04:56.564 "nbd_device": "/dev/nbd1", 00:04:56.564 "bdev_name": "Malloc1" 00:04:56.564 } 00:04:56.564 ]' 00:04:56.564 04:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.564 04:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.564 { 00:04:56.564 "nbd_device": "/dev/nbd0", 00:04:56.564 "bdev_name": "Malloc0" 00:04:56.564 }, 00:04:56.565 { 00:04:56.565 "nbd_device": "/dev/nbd1", 00:04:56.565 "bdev_name": "Malloc1" 00:04:56.565 } 00:04:56.565 ]' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.565 /dev/nbd1' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.565 /dev/nbd1' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.565 256+0 records in 00:04:56.565 256+0 records out 00:04:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663698 s, 158 MB/s 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.565 256+0 records in 00:04:56.565 256+0 records out 00:04:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214219 s, 48.9 MB/s 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.565 256+0 records in 00:04:56.565 256+0 records out 00:04:56.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173204 s, 60.5 MB/s 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.565 04:25:19 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.565 04:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.822 04:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.079 04:25:20 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.079 04:25:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.338 04:25:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.338 04:25:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.596 04:25:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.161 [2024-11-03 04:25:21.139960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.161 [2024-11-03 04:25:21.212111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.161 [2024-11-03 04:25:21.212219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.420 [2024-11-03 04:25:21.315667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.420 [2024-11-03 04:25:21.315735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.963 04:25:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.963 spdk_app_start Round 1 00:05:00.963 04:25:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.963 04:25:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58457 /var/tmp/spdk-nbd.sock 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58457 ']' 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
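[editor's note] Round 0 above exercised the nbd_dd_data_verify write/verify path. Condensed into a sketch (device names, block size and count as in this run), it amounts to:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
# write phase: 1 MiB of random data, pushed to every NBD-exported Malloc bdev
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
# verify phase: every device must read back byte-identical to the source file
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"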
00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.963 04:25:23 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:00.963 04:25:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.963 Malloc0 00:05:00.963 04:25:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.221 Malloc1 00:05:01.221 04:25:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.221 04:25:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.480 /dev/nbd0 00:05:01.480 04:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.480 04:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.480 1+0 records in 00:05:01.480 1+0 records out 
00:05:01.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429367 s, 9.5 MB/s 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:01.480 04:25:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:01.480 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.480 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.480 04:25:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.738 /dev/nbd1 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.738 1+0 records in 00:05:01.738 1+0 records out 00:05:01.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185197 s, 22.1 MB/s 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:01.738 04:25:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.738 04:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.997 04:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.997 { 00:05:01.997 "nbd_device": "/dev/nbd0", 00:05:01.997 "bdev_name": "Malloc0" 00:05:01.997 }, 00:05:01.997 { 00:05:01.997 "nbd_device": "/dev/nbd1", 00:05:01.997 "bdev_name": "Malloc1" 00:05:01.997 } 
00:05:01.997 ]' 00:05:01.997 04:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.997 04:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.997 { 00:05:01.997 "nbd_device": "/dev/nbd0", 00:05:01.997 "bdev_name": "Malloc0" 00:05:01.997 }, 00:05:01.997 { 00:05:01.997 "nbd_device": "/dev/nbd1", 00:05:01.997 "bdev_name": "Malloc1" 00:05:01.997 } 00:05:01.997 ]' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.997 /dev/nbd1' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.997 /dev/nbd1' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.997 256+0 records in 00:05:01.997 256+0 records out 00:05:01.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530701 s, 198 MB/s 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.997 256+0 records in 00:05:01.997 256+0 records out 00:05:01.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146824 s, 71.4 MB/s 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.997 256+0 records in 00:05:01.997 256+0 records out 00:05:01.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209787 s, 50.0 MB/s 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.997 04:25:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.997 04:25:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.254 04:25:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.254 04:25:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.254 04:25:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.254 04:25:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.255 04:25:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.513 04:25:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.772 04:25:25 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.772 04:25:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.772 04:25:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.030 04:25:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.596 [2024-11-03 04:25:26.574060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.596 [2024-11-03 04:25:26.647364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.596 [2024-11-03 04:25:26.647467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.855 [2024-11-03 04:25:26.745529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:03.855 [2024-11-03 04:25:26.745579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.382 spdk_app_start Round 2 00:05:06.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.382 04:25:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.382 04:25:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:06.382 04:25:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58457 /var/tmp/spdk-nbd.sock 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58457 ']' 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
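[editor's note] The count check that just confirmed both devices were detached works roughly as below; a sketch of the nbd_get_count helper as it appears in the trace (jq filter and grep pattern taken directly from the lines above).

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c always prints the match count, but exits non-zero when the count is 0
    echo "$nbd_disks_name" | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # expected to be 0 once every nbd_stop_disk has completed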
00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.382 04:25:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:06.382 04:25:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.382 Malloc0 00:05:06.382 04:25:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.642 Malloc1 00:05:06.642 04:25:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.642 04:25:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.642 04:25:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.642 04:25:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.642 04:25:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.643 04:25:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.900 /dev/nbd0 00:05:06.900 04:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.900 04:25:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.900 04:25:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:06.900 04:25:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:06.900 04:25:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.901 1+0 records in 00:05:06.901 1+0 records out 
00:05:06.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183497 s, 22.3 MB/s 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:06.901 04:25:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:06.901 04:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.901 04:25:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.901 04:25:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.159 /dev/nbd1 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.159 1+0 records in 00:05:07.159 1+0 records out 00:05:07.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269481 s, 15.2 MB/s 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:07.159 04:25:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.159 04:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.416 04:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.416 { 00:05:07.416 "nbd_device": "/dev/nbd0", 00:05:07.416 "bdev_name": "Malloc0" 00:05:07.416 }, 00:05:07.416 { 00:05:07.416 "nbd_device": "/dev/nbd1", 00:05:07.417 "bdev_name": "Malloc1" 00:05:07.417 } 
00:05:07.417 ]' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.417 { 00:05:07.417 "nbd_device": "/dev/nbd0", 00:05:07.417 "bdev_name": "Malloc0" 00:05:07.417 }, 00:05:07.417 { 00:05:07.417 "nbd_device": "/dev/nbd1", 00:05:07.417 "bdev_name": "Malloc1" 00:05:07.417 } 00:05:07.417 ]' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.417 /dev/nbd1' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.417 /dev/nbd1' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.417 256+0 records in 00:05:07.417 256+0 records out 00:05:07.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744922 s, 141 MB/s 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.417 256+0 records in 00:05:07.417 256+0 records out 00:05:07.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173797 s, 60.3 MB/s 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.417 256+0 records in 00:05:07.417 256+0 records out 00:05:07.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230698 s, 45.5 MB/s 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.417 04:25:30 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.417 04:25:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.675 04:25:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.933 04:25:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.191 04:25:31 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.191 04:25:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.191 04:25:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.450 04:25:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.016 [2024-11-03 04:25:31.963462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.016 [2024-11-03 04:25:32.033221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.016 [2024-11-03 04:25:32.033227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.273 [2024-11-03 04:25:32.130298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.273 [2024-11-03 04:25:32.130353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.802 04:25:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58457 /var/tmp/spdk-nbd.sock 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58457 ']' 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
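[editor's note] Taken together, the three passes follow the loop sketched below; this is a condensed view of the app_repeat_test flow visible in the trace, with the per-round bdev/NBD work elided. The final round's shutdown, traced next, is the killprocess after the loop.

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    # ...create Malloc0/Malloc1, attach /dev/nbd0 and /dev/nbd1, run the dd/cmp pass, detach...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3        # give the app time to shut this round down and come back up
done
waitforlisten "$repeat_pid" "$rpc_server"   # the app's final round
killprocess "$repeat_pid"                   # ends Round 3 and the test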
00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:11.802 04:25:34 event.app_repeat -- event/event.sh@39 -- # killprocess 58457 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58457 ']' 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58457 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58457 00:05:11.802 killing process with pid 58457 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58457' 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58457 00:05:11.802 04:25:34 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58457 00:05:12.368 spdk_app_start is called in Round 0. 00:05:12.368 Shutdown signal received, stop current app iteration 00:05:12.368 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:12.368 spdk_app_start is called in Round 1. 00:05:12.368 Shutdown signal received, stop current app iteration 00:05:12.368 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:12.368 spdk_app_start is called in Round 2. 00:05:12.368 Shutdown signal received, stop current app iteration 00:05:12.368 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:12.368 spdk_app_start is called in Round 3. 00:05:12.368 Shutdown signal received, stop current app iteration 00:05:12.368 04:25:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:12.368 04:25:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:12.368 00:05:12.368 real 0m17.928s 00:05:12.368 user 0m39.387s 00:05:12.368 sys 0m2.021s 00:05:12.368 04:25:35 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.368 04:25:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.368 ************************************ 00:05:12.368 END TEST app_repeat 00:05:12.368 ************************************ 00:05:12.368 04:25:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:12.368 04:25:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:12.368 04:25:35 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.368 04:25:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.368 04:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.368 ************************************ 00:05:12.368 START TEST cpu_locks 00:05:12.368 ************************************ 00:05:12.368 04:25:35 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:12.628 * Looking for test storage... 
00:05:12.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.628 04:25:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.628 04:25:35 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.628 --rc genhtml_branch_coverage=1 00:05:12.628 --rc genhtml_function_coverage=1 00:05:12.628 --rc genhtml_legend=1 00:05:12.628 --rc geninfo_all_blocks=1 00:05:12.628 --rc geninfo_unexecuted_blocks=1 00:05:12.628 00:05:12.628 ' 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.629 --rc genhtml_branch_coverage=1 00:05:12.629 --rc genhtml_function_coverage=1 
00:05:12.629 --rc genhtml_legend=1 00:05:12.629 --rc geninfo_all_blocks=1 00:05:12.629 --rc geninfo_unexecuted_blocks=1 00:05:12.629 00:05:12.629 ' 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.629 --rc genhtml_branch_coverage=1 00:05:12.629 --rc genhtml_function_coverage=1 00:05:12.629 --rc genhtml_legend=1 00:05:12.629 --rc geninfo_all_blocks=1 00:05:12.629 --rc geninfo_unexecuted_blocks=1 00:05:12.629 00:05:12.629 ' 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.629 --rc genhtml_branch_coverage=1 00:05:12.629 --rc genhtml_function_coverage=1 00:05:12.629 --rc genhtml_legend=1 00:05:12.629 --rc geninfo_all_blocks=1 00:05:12.629 --rc geninfo_unexecuted_blocks=1 00:05:12.629 00:05:12.629 ' 00:05:12.629 04:25:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:12.629 04:25:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:12.629 04:25:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:12.629 04:25:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.629 04:25:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.629 ************************************ 00:05:12.629 START TEST default_locks 00:05:12.629 ************************************ 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58889 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58889 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58889 ']' 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.629 04:25:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.629 [2024-11-03 04:25:35.639490] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
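The shell trace just above is the coverage gate from scripts/common.sh: lcov's version is pulled with awk '{print $NF}', then lt 1.15 2 calls cmp_versions, which splits both strings on the IFS=.-: separators and compares the fields numerically from left to right, padding the shorter version with zeros. A minimal stand-alone sketch of that comparison (ver_lt is an illustrative name, not the helper the script actually defines):

    # Succeed if version $1 is strictly lower than version $2.
    # Splits on the same separators the trace shows (IFS=.-:) and compares field by field.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "lower than"
    }
    ver_lt 1.15 2 && echo "lcov older than 2: add the branch/function coverage flags"

Because 1 < 2 in the first field the check succeeds here, so the extra --rc lcov_branch_coverage / --rc lcov_function_coverage options get exported, which is exactly what the LCOV_OPTS/LCOV exports above show.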
00:05:12.629 [2024-11-03 04:25:35.639775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58889 ] 00:05:12.894 [2024-11-03 04:25:35.802513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.894 [2024-11-03 04:25:35.898339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.461 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.461 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:13.461 04:25:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58889 00:05:13.461 04:25:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58889 00:05:13.461 04:25:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58889 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58889 ']' 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58889 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58889 00:05:13.719 killing process with pid 58889 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58889' 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58889 00:05:13.719 04:25:36 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58889 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58889 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58889 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.097 ERROR: process (pid: 58889) is no longer running 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58889 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58889 ']' 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58889) - No such process 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.097 00:05:15.097 real 0m2.401s 00:05:15.097 user 0m2.425s 00:05:15.097 sys 0m0.412s 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.097 ************************************ 00:05:15.097 END TEST default_locks 00:05:15.097 ************************************ 00:05:15.097 04:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 04:25:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.097 04:25:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.097 04:25:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.097 04:25:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 ************************************ 00:05:15.097 START TEST default_locks_via_rpc 00:05:15.097 ************************************ 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
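The default_locks run above reduces to: start spdk_tgt pinned to core 0 (-m 0x1), confirm the live target holds an spdk_cpu_lock file (lslocks -p PID | grep spdk_cpu_lock), kill it, and then expect a waitforlisten on the dead pid to fail. A rough, hedged sketch of that flow outside the harness; wait_for_socket is an illustrative stand-in for the suite's waitforlisten helper, and the binary path is the one shown in the trace:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk.sock

    wait_for_socket() {                 # illustrative stand-in for waitforlisten
        local i
        for (( i = 0; i < 100; i++ )); do
            [[ -S $SOCK ]] && return 0
            sleep 0.1
        done
        return 1
    }

    "$SPDK_TGT" -m 0x1 &                # claims core 0 and creates /var/tmp/spdk_cpu_lock_000
    pid=$!
    wait_for_socket
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"; wait "$pid" 2>/dev/null || true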
00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58953 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58953 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58953 ']' 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.097 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.097 [2024-11-03 04:25:38.084416] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:15.097 [2024-11-03 04:25:38.084510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:05:15.356 [2024-11-03 04:25:38.234400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.356 [2024-11-03 04:25:38.317755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.921 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58953 00:05:15.922 04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58953 00:05:15.922 
04:25:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.179 04:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58953 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58953 ']' 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58953 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58953 00:05:16.180 killing process with pid 58953 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58953' 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58953 00:05:16.180 04:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58953 00:05:17.560 ************************************ 00:05:17.560 END TEST default_locks_via_rpc 00:05:17.560 ************************************ 00:05:17.560 00:05:17.560 real 0m2.308s 00:05:17.560 user 0m2.345s 00:05:17.560 sys 0m0.416s 00:05:17.560 04:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.560 04:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.560 04:25:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.560 04:25:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.560 04:25:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.560 04:25:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.560 ************************************ 00:05:17.560 START TEST non_locking_app_on_locked_coremask 00:05:17.561 ************************************ 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:17.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
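default_locks_via_rpc, which finished just above, exercises the same lock but toggles it over JSON-RPC while the target is running: framework_disable_cpumask_locks drops the /var/tmp/spdk_cpu_lock_* files, framework_enable_cpumask_locks re-claims them, and lslocks is checked again afterwards. A hedged sketch using the stock rpc.py client (the path is assumed; the method names are the ones visible in the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed location of the RPC client
    "$RPC" framework_disable_cpumask_locks             # release the per-core lock files
    "$RPC" framework_enable_cpumask_locks              # re-claim them for the assigned cores
    lslocks -p "$(pidof spdk_tgt)" | grep -q spdk_cpu_lock && echo "locks re-acquired"

(pidof assumes a single running spdk_tgt; the harness tracks the pid explicitly instead.)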
00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59005 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59005 /var/tmp/spdk.sock 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59005 ']' 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.561 04:25:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.561 [2024-11-03 04:25:40.446906] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:17.561 [2024-11-03 04:25:40.447198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59005 ] 00:05:17.561 [2024-11-03 04:25:40.601434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.819 [2024-11-03 04:25:40.677876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59021 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59021 /var/tmp/spdk2.sock 00:05:18.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59021 ']' 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.385 04:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.385 [2024-11-03 04:25:41.347751] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:18.385 [2024-11-03 04:25:41.348014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:05:18.643 [2024-11-03 04:25:41.513282] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:18.643 [2024-11-03 04:25:41.513342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.643 [2024-11-03 04:25:41.675162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.577 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.577 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:19.577 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59005 00:05:19.577 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59005 00:05:19.577 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59005 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59005 ']' 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59005 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59005 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.835 killing process with pid 59005 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59005' 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59005 00:05:19.835 04:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59005 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59021 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59021 ']' 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59021 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 
-- # uname 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59021 00:05:22.364 killing process with pid 59021 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59021' 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59021 00:05:22.364 04:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59021 00:05:23.809 00:05:23.809 real 0m6.034s 00:05:23.809 user 0m6.299s 00:05:23.809 sys 0m0.783s 00:05:23.809 04:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.809 04:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.809 ************************************ 00:05:23.809 END TEST non_locking_app_on_locked_coremask 00:05:23.809 ************************************ 00:05:23.809 04:25:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.809 04:25:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.809 04:25:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.809 04:25:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.809 ************************************ 00:05:23.809 START TEST locking_app_on_unlocked_coremask 00:05:23.809 ************************************ 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59112 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59112 /var/tmp/spdk.sock 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59112 ']' 00:05:23.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
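non_locking_app_on_locked_coremask, traced above, starts two targets on the same single-core mask: the first takes the core lock as usual, the second is launched with --disable-cpumask-locks and its own RPC socket so it can share the core without claiming it, and only the first pid should show up in lslocks. A sketch of the two launches, with the flags taken from the trace:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                                                  # holds spdk_cpu_lock_000
    pid1=$!
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
    pid2=$!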
00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.809 04:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.809 [2024-11-03 04:25:46.531987] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:23.809 [2024-11-03 04:25:46.532107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:05:23.809 [2024-11-03 04:25:46.692550] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.809 [2024-11-03 04:25:46.692597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.809 [2024-11-03 04:25:46.769057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59128 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59128 /var/tmp/spdk2.sock 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59128 ']' 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.374 04:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.374 [2024-11-03 04:25:47.443089] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:24.374 [2024-11-03 04:25:47.443666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ] 00:05:24.632 [2024-11-03 04:25:47.605069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.889 [2024-11-03 04:25:47.770872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.820 04:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.820 04:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:25.820 04:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59128 00:05:25.820 04:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.820 04:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59128 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59112 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59112 ']' 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59112 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59112 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.077 killing process with pid 59112 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59112' 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59112 00:05:26.077 04:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59112 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59128 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59128 ']' 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59128 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59128 00:05:28.601 killing process with pid 59128 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.601 04:25:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59128' 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59128 00:05:28.601 04:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59128 00:05:29.567 ************************************ 00:05:29.567 END TEST locking_app_on_unlocked_coremask 00:05:29.567 ************************************ 00:05:29.567 00:05:29.567 real 0m6.151s 00:05:29.567 user 0m6.448s 00:05:29.567 sys 0m0.817s 00:05:29.567 04:25:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.567 04:25:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.567 04:25:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.567 04:25:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.567 04:25:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.567 04:25:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.826 ************************************ 00:05:29.826 START TEST locking_app_on_locked_coremask 00:05:29.826 ************************************ 00:05:29.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59219 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59219 /var/tmp/spdk.sock 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59219 ']' 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.826 04:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.826 [2024-11-03 04:25:52.717613] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
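locking_app_on_unlocked_coremask, above, is the mirror image: the first target starts with --disable-cpumask-locks so core 0 stays unlocked, and the second target on the same mask does take the lock; the trace confirms locks_exist is checked against the second pid (59128). A short sketch of the launch order, again with flags from the trace:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &          # no lock file is created
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &           # starts fine and claims spdk_cpu_lock_000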
00:05:29.826 [2024-11-03 04:25:52.717707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59219 ] 00:05:29.826 [2024-11-03 04:25:52.873534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.083 [2024-11-03 04:25:52.968151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59235 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59235 /var/tmp/spdk2.sock 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59235 /var/tmp/spdk2.sock 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59235 /var/tmp/spdk2.sock 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59235 ']' 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.650 04:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.650 [2024-11-03 04:25:53.632143] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:30.650 [2024-11-03 04:25:53.632439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59235 ] 00:05:30.911 [2024-11-03 04:25:53.805840] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59219 has claimed it. 00:05:30.911 [2024-11-03 04:25:53.805895] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.482 ERROR: process (pid: 59235) is no longer running 00:05:31.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59235) - No such process 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59219 ']' 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.482 killing process with pid 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59219' 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59219 00:05:31.482 04:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59219 00:05:32.861 00:05:32.861 real 0m3.065s 00:05:32.861 user 0m3.286s 00:05:32.861 sys 0m0.505s 00:05:32.861 04:25:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.861 ************************************ 00:05:32.861 END 
TEST locking_app_on_locked_coremask 00:05:32.861 ************************************ 00:05:32.861 04:25:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 04:25:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:32.861 04:25:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.861 04:25:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.861 04:25:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 ************************************ 00:05:32.861 START TEST locking_overlapped_coremask 00:05:32.861 ************************************ 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59288 00:05:32.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59288 /var/tmp/spdk.sock 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59288 ']' 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.861 04:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.861 [2024-11-03 04:25:55.833295] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
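locking_app_on_locked_coremask, whose tail is above, is the first failure-path test: the second target uses the same mask with locks enabled, so app.c logs 'Cannot create lock on core 0, probably process 59219 has claimed it' and exits, and the harness wraps waitforlisten for that pid in NOT so the expected failure counts as a pass. A simplified, illustrative stand-in for that inversion (the real NOT helper in autotest_common.sh also does error-status bookkeeping):

    not() {                     # invert a command's exit status
        if "$@"; then return 1; else return 0; fi
    }
    not false && echo "expected failure observed"

In the test the inverted command is waitforlisten on the second target's pid, which fails precisely because that target refused the already-locked core and never opened /var/tmp/spdk2.sock.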
00:05:32.861 [2024-11-03 04:25:55.833412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:05:33.119 [2024-11-03 04:25:55.987451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.119 [2024-11-03 04:25:56.064411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.119 [2024-11-03 04:25:56.064478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.119 [2024-11-03 04:25:56.064507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59306 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59306 /var/tmp/spdk2.sock 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59306 /var/tmp/spdk2.sock 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59306 /var/tmp/spdk2.sock 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59306 ']' 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.685 04:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.685 [2024-11-03 04:25:56.733323] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:33.685 [2024-11-03 04:25:56.733599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:05:33.943 [2024-11-03 04:25:56.909595] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59288 has claimed it. 00:05:33.943 [2024-11-03 04:25:56.909663] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.535 ERROR: process (pid: 59306) is no longer running 00:05:34.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59306) - No such process 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59288 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59288 ']' 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59288 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59288 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.535 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59288' 00:05:34.535 killing process with pid 59288 00:05:34.536 04:25:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59288 00:05:34.536 04:25:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59288 00:05:35.909 00:05:35.909 real 0m2.808s 00:05:35.909 user 0m7.665s 00:05:35.909 sys 0m0.407s 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.909 ************************************ 00:05:35.909 END TEST locking_overlapped_coremask 00:05:35.909 ************************************ 00:05:35.909 04:25:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:35.909 04:25:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.909 04:25:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.909 04:25:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.909 ************************************ 00:05:35.909 START TEST locking_overlapped_coremask_via_rpc 00:05:35.909 ************************************ 00:05:35.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59358 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59358 /var/tmp/spdk.sock 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59358 ']' 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.909 04:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.909 [2024-11-03 04:25:58.674874] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:35.909 [2024-11-03 04:25:58.675072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59358 ] 00:05:35.909 [2024-11-03 04:25:58.823573] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
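locking_overlapped_coremask, finishing above, repeats the refusal check with multi-core masks: the first target takes 0x7 (cores 0-2), the second asks for 0x1c (cores 2-4) and is rejected because core 2 is already claimed, and check_remaining_locks then verifies that exactly /var/tmp/spdk_cpu_lock_000 through _002 are left behind. A sketch of that final verification, using the same glob and brace expansion the trace shows:

    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${expected[*]}" ]] && echo "only cores 0-2 hold lock files"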
00:05:35.909 [2024-11-03 04:25:58.823613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.909 [2024-11-03 04:25:58.907146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.909 [2024-11-03 04:25:58.907366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.909 [2024-11-03 04:25:58.907380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59372 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59372 /var/tmp/spdk2.sock 00:05:36.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59372 ']' 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.475 04:25:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.737 [2024-11-03 04:25:59.596390] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:36.737 [2024-11-03 04:25:59.597000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:05:36.737 [2024-11-03 04:25:59.775016] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.737 [2024-11-03 04:25:59.775086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.997 [2024-11-03 04:26:00.016488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.997 [2024-11-03 04:26:00.016556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:36.997 [2024-11-03 04:26:00.016529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.369 [2024-11-03 04:26:01.205717] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59358 has claimed it. 
00:05:38.369 request: 00:05:38.369 { 00:05:38.369 "method": "framework_enable_cpumask_locks", 00:05:38.369 "req_id": 1 00:05:38.369 } 00:05:38.369 Got JSON-RPC error response 00:05:38.369 response: 00:05:38.369 { 00:05:38.369 "code": -32603, 00:05:38.369 "message": "Failed to claim CPU core: 2" 00:05:38.369 } 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59358 /var/tmp/spdk.sock 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59358 ']' 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.369 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59372 /var/tmp/spdk2.sock 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59372 ']' 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.370 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 ************************************ 00:05:38.628 END TEST locking_overlapped_coremask_via_rpc 00:05:38.628 ************************************ 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.628 00:05:38.628 real 0m3.013s 00:05:38.628 user 0m1.051s 00:05:38.628 sys 0m0.132s 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.628 04:26:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 04:26:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.628 04:26:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59358 ]] 00:05:38.628 04:26:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59358 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59358 ']' 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59358 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59358 00:05:38.628 killing process with pid 59358 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59358' 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59358 00:05:38.628 04:26:01 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59358 00:05:40.008 04:26:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59372 ]] 00:05:40.008 04:26:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59372 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59372 ']' 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59372 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.008 
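(Sketch, not captured output: the failed claim above can be reproduced by hand with the same binaries and sockets this test uses. Both targets start with --disable-cpumask-locks; whichever calls framework_enable_cpumask_locks first claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2, so the second target, whose 0x1c mask overlaps on core 2, gets the -32603 "Failed to claim CPU core: 2" error shown above. Paths are the ones used elsewhere in this job.)
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ./scripts/rpc.py framework_enable_cpumask_locks                           # first target claims cores 0, 1, 2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails: core 2 is already claimed
  ls /var/tmp/spdk_cpu_lock_*                                               # the 000/001/002 lock files that check_remaining_locks verifies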
04:26:02 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59372 00:05:40.008 killing process with pid 59372 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59372' 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59372 00:05:40.008 04:26:02 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59372 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59358 ]] 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59358 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59358 ']' 00:05:41.384 Process with pid 59358 is not found 00:05:41.384 Process with pid 59372 is not found 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59358 00:05:41.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59358) - No such process 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59358 is not found' 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59372 ]] 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59372 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59372 ']' 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59372 00:05:41.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59372) - No such process 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59372 is not found' 00:05:41.384 04:26:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.384 ************************************ 00:05:41.384 END TEST cpu_locks 00:05:41.384 ************************************ 00:05:41.384 00:05:41.384 real 0m28.748s 00:05:41.384 user 0m50.265s 00:05:41.384 sys 0m4.296s 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.384 04:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 ************************************ 00:05:41.384 END TEST event 00:05:41.384 ************************************ 00:05:41.384 00:05:41.384 real 0m56.467s 00:05:41.384 user 1m45.359s 00:05:41.384 sys 0m7.076s 00:05:41.384 04:26:04 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.384 04:26:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 04:26:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.384 04:26:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:41.384 04:26:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.384 04:26:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 ************************************ 00:05:41.384 START TEST thread 00:05:41.384 ************************************ 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.384 * Looking for test storage... 
00:05:41.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.384 04:26:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.384 04:26:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.384 04:26:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.384 04:26:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.384 04:26:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.384 04:26:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.384 04:26:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.384 04:26:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.384 04:26:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.384 04:26:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.384 04:26:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.384 04:26:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.384 04:26:04 thread -- scripts/common.sh@345 -- # : 1 00:05:41.384 04:26:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.384 04:26:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.384 04:26:04 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.384 04:26:04 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.384 04:26:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.384 04:26:04 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.384 04:26:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.384 04:26:04 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.384 04:26:04 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.384 04:26:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.384 04:26:04 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.384 04:26:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.384 04:26:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.384 04:26:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.384 04:26:04 thread -- scripts/common.sh@368 -- # return 0 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.384 --rc genhtml_branch_coverage=1 00:05:41.384 --rc genhtml_function_coverage=1 00:05:41.384 --rc genhtml_legend=1 00:05:41.384 --rc geninfo_all_blocks=1 00:05:41.384 --rc geninfo_unexecuted_blocks=1 00:05:41.384 00:05:41.384 ' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.384 --rc genhtml_branch_coverage=1 00:05:41.384 --rc genhtml_function_coverage=1 00:05:41.384 --rc genhtml_legend=1 00:05:41.384 --rc geninfo_all_blocks=1 00:05:41.384 --rc geninfo_unexecuted_blocks=1 00:05:41.384 00:05:41.384 ' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.384 --rc genhtml_branch_coverage=1 00:05:41.384 --rc genhtml_function_coverage=1 00:05:41.384 --rc genhtml_legend=1 00:05:41.384 --rc geninfo_all_blocks=1 00:05:41.384 --rc geninfo_unexecuted_blocks=1 00:05:41.384 00:05:41.384 ' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.384 --rc genhtml_branch_coverage=1 00:05:41.384 --rc genhtml_function_coverage=1 00:05:41.384 --rc genhtml_legend=1 00:05:41.384 --rc geninfo_all_blocks=1 00:05:41.384 --rc geninfo_unexecuted_blocks=1 00:05:41.384 00:05:41.384 ' 00:05:41.384 04:26:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.384 04:26:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 ************************************ 00:05:41.384 START TEST thread_poller_perf 00:05:41.384 ************************************ 00:05:41.384 04:26:04 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.384 [2024-11-03 04:26:04.416044] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:41.384 [2024-11-03 04:26:04.416658] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:05:41.644 [2024-11-03 04:26:04.577110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.644 [2024-11-03 04:26:04.673989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.644 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:43.018 [2024-11-03T04:26:06.102Z] ====================================== 00:05:43.018 [2024-11-03T04:26:06.102Z] busy:2614338932 (cyc) 00:05:43.018 [2024-11-03T04:26:06.102Z] total_run_count: 307000 00:05:43.018 [2024-11-03T04:26:06.102Z] tsc_hz: 2600000000 (cyc) 00:05:43.018 [2024-11-03T04:26:06.102Z] ====================================== 00:05:43.018 [2024-11-03T04:26:06.102Z] poller_cost: 8515 (cyc), 3275 (nsec) 00:05:43.018 00:05:43.018 real 0m1.451s 00:05:43.018 user 0m1.282s 00:05:43.018 sys 0m0.062s 00:05:43.018 04:26:05 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.018 04:26:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.018 ************************************ 00:05:43.018 END TEST thread_poller_perf 00:05:43.018 ************************************ 00:05:43.018 04:26:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.018 04:26:05 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:43.018 04:26:05 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.018 04:26:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.018 ************************************ 00:05:43.018 START TEST thread_poller_perf 00:05:43.018 ************************************ 00:05:43.018 04:26:05 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.018 [2024-11-03 04:26:05.921953] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:43.018 [2024-11-03 04:26:05.922064] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:05:43.018 [2024-11-03 04:26:06.082752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.276 Running 1000 pollers for 1 seconds with 0 microseconds period. 
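(Sketch: the poller_cost line in the summary above follows from the other counters — busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Whether poller_perf truncates or rounds internally is an assumption here; with truncation the figures of the 1-usec-period run reproduce exactly.)
  awk 'BEGIN { busy=2614338932; runs=307000; tsc_hz=2600000000;
               cyc = int(busy/runs);
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc*1e9/tsc_hz }'
  # prints: poller_cost: 8515 (cyc), 3275 (nsec) -- matching the table above;
  # the 0-usec run further below works out the same way: 2603222228/3940000 -> 660 cyc -> 253 nsec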
00:05:43.276 [2024-11-03 04:26:06.177386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.654 [2024-11-03T04:26:07.738Z] ====================================== 00:05:44.654 [2024-11-03T04:26:07.738Z] busy:2603222228 (cyc) 00:05:44.654 [2024-11-03T04:26:07.738Z] total_run_count: 3940000 00:05:44.654 [2024-11-03T04:26:07.738Z] tsc_hz: 2600000000 (cyc) 00:05:44.654 [2024-11-03T04:26:07.738Z] ====================================== 00:05:44.654 [2024-11-03T04:26:07.738Z] poller_cost: 660 (cyc), 253 (nsec) 00:05:44.654 ************************************ 00:05:44.654 END TEST thread_poller_perf 00:05:44.654 ************************************ 00:05:44.654 00:05:44.654 real 0m1.449s 00:05:44.654 user 0m1.277s 00:05:44.654 sys 0m0.065s 00:05:44.654 04:26:07 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.654 04:26:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.654 04:26:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.654 ************************************ 00:05:44.654 END TEST thread 00:05:44.654 ************************************ 00:05:44.654 00:05:44.654 real 0m3.154s 00:05:44.654 user 0m2.670s 00:05:44.654 sys 0m0.241s 00:05:44.654 04:26:07 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.654 04:26:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.654 04:26:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.654 04:26:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.654 04:26:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.654 04:26:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.654 04:26:07 -- common/autotest_common.sh@10 -- # set +x 00:05:44.654 ************************************ 00:05:44.654 START TEST app_cmdline 00:05:44.654 ************************************ 00:05:44.654 04:26:07 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.654 * Looking for test storage... 
00:05:44.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:44.654 04:26:07 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.654 04:26:07 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.654 04:26:07 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.654 04:26:07 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.654 04:26:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.655 04:26:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.655 --rc genhtml_branch_coverage=1 00:05:44.655 --rc genhtml_function_coverage=1 00:05:44.655 --rc genhtml_legend=1 00:05:44.655 --rc geninfo_all_blocks=1 00:05:44.655 --rc geninfo_unexecuted_blocks=1 00:05:44.655 00:05:44.655 ' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.655 --rc genhtml_branch_coverage=1 00:05:44.655 --rc genhtml_function_coverage=1 00:05:44.655 --rc genhtml_legend=1 00:05:44.655 --rc geninfo_all_blocks=1 00:05:44.655 --rc geninfo_unexecuted_blocks=1 00:05:44.655 
00:05:44.655 ' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.655 --rc genhtml_branch_coverage=1 00:05:44.655 --rc genhtml_function_coverage=1 00:05:44.655 --rc genhtml_legend=1 00:05:44.655 --rc geninfo_all_blocks=1 00:05:44.655 --rc geninfo_unexecuted_blocks=1 00:05:44.655 00:05:44.655 ' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.655 --rc genhtml_branch_coverage=1 00:05:44.655 --rc genhtml_function_coverage=1 00:05:44.655 --rc genhtml_legend=1 00:05:44.655 --rc geninfo_all_blocks=1 00:05:44.655 --rc geninfo_unexecuted_blocks=1 00:05:44.655 00:05:44.655 ' 00:05:44.655 04:26:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.655 04:26:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59652 00:05:44.655 04:26:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59652 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59652 ']' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.655 04:26:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.655 04:26:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.655 [2024-11-03 04:26:07.652326] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:44.655 [2024-11-03 04:26:07.652446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:05:44.913 [2024-11-03 04:26:07.811874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.913 [2024-11-03 04:26:07.906288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.478 04:26:08 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.478 04:26:08 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:45.478 04:26:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:45.738 { 00:05:45.739 "version": "SPDK v25.01-pre git sha1 fa3ab7384", 00:05:45.739 "fields": { 00:05:45.739 "major": 25, 00:05:45.739 "minor": 1, 00:05:45.739 "patch": 0, 00:05:45.739 "suffix": "-pre", 00:05:45.739 "commit": "fa3ab7384" 00:05:45.739 } 00:05:45.739 } 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:45.739 04:26:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:45.739 04:26:08 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.000 request: 00:05:46.000 { 00:05:46.000 "method": "env_dpdk_get_mem_stats", 00:05:46.000 "req_id": 1 00:05:46.000 } 00:05:46.000 Got JSON-RPC error response 00:05:46.000 response: 00:05:46.000 { 00:05:46.000 "code": -32601, 00:05:46.000 "message": "Method not found" 00:05:46.000 } 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.000 04:26:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59652 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59652 ']' 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59652 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59652 00:05:46.000 killing process with pid 59652 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59652' 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@971 -- # kill 59652 00:05:46.000 04:26:08 app_cmdline -- common/autotest_common.sh@976 -- # wait 59652 00:05:47.375 ************************************ 00:05:47.375 END TEST app_cmdline 00:05:47.375 ************************************ 00:05:47.375 00:05:47.375 real 0m2.872s 00:05:47.375 user 0m3.194s 00:05:47.375 sys 0m0.408s 00:05:47.375 04:26:10 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.375 04:26:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.375 04:26:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:47.375 04:26:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.375 04:26:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.375 04:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.375 ************************************ 00:05:47.375 START TEST version 00:05:47.375 ************************************ 00:05:47.375 04:26:10 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:47.375 * Looking for test storage... 
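(Sketch: the -32601 "Method not found" above is the expected effect of the --rpcs-allowed list this spdk_tgt was started with; only spdk_get_version and rpc_get_methods answer, and every other method is rejected. The same check by hand, using the paths shown in this job:)
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version           # allowed: returns the version JSON seen above
  ./scripts/rpc.py rpc_get_methods            # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats     # rejected: JSON-RPC error -32601, "Method not found"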
00:05:47.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:47.375 04:26:10 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.375 04:26:10 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.375 04:26:10 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.636 04:26:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.636 04:26:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.636 04:26:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.636 04:26:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.636 04:26:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.636 04:26:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.636 04:26:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.636 04:26:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.636 04:26:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.636 04:26:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.636 04:26:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.636 04:26:10 version -- scripts/common.sh@344 -- # case "$op" in 00:05:47.636 04:26:10 version -- scripts/common.sh@345 -- # : 1 00:05:47.636 04:26:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.636 04:26:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.636 04:26:10 version -- scripts/common.sh@365 -- # decimal 1 00:05:47.636 04:26:10 version -- scripts/common.sh@353 -- # local d=1 00:05:47.636 04:26:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.636 04:26:10 version -- scripts/common.sh@355 -- # echo 1 00:05:47.636 04:26:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.636 04:26:10 version -- scripts/common.sh@366 -- # decimal 2 00:05:47.636 04:26:10 version -- scripts/common.sh@353 -- # local d=2 00:05:47.636 04:26:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.636 04:26:10 version -- scripts/common.sh@355 -- # echo 2 00:05:47.636 04:26:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.636 04:26:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.636 04:26:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.636 04:26:10 version -- scripts/common.sh@368 -- # return 0 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.636 --rc genhtml_branch_coverage=1 00:05:47.636 --rc genhtml_function_coverage=1 00:05:47.636 --rc genhtml_legend=1 00:05:47.636 --rc geninfo_all_blocks=1 00:05:47.636 --rc geninfo_unexecuted_blocks=1 00:05:47.636 00:05:47.636 ' 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.636 --rc genhtml_branch_coverage=1 00:05:47.636 --rc genhtml_function_coverage=1 00:05:47.636 --rc genhtml_legend=1 00:05:47.636 --rc geninfo_all_blocks=1 00:05:47.636 --rc geninfo_unexecuted_blocks=1 00:05:47.636 00:05:47.636 ' 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.636 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:47.636 --rc genhtml_branch_coverage=1 00:05:47.636 --rc genhtml_function_coverage=1 00:05:47.636 --rc genhtml_legend=1 00:05:47.636 --rc geninfo_all_blocks=1 00:05:47.636 --rc geninfo_unexecuted_blocks=1 00:05:47.636 00:05:47.636 ' 00:05:47.636 04:26:10 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.636 --rc genhtml_branch_coverage=1 00:05:47.637 --rc genhtml_function_coverage=1 00:05:47.637 --rc genhtml_legend=1 00:05:47.637 --rc geninfo_all_blocks=1 00:05:47.637 --rc geninfo_unexecuted_blocks=1 00:05:47.637 00:05:47.637 ' 00:05:47.637 04:26:10 version -- app/version.sh@17 -- # get_header_version major 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # cut -f2 00:05:47.637 04:26:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.637 04:26:10 version -- app/version.sh@17 -- # major=25 00:05:47.637 04:26:10 version -- app/version.sh@18 -- # get_header_version minor 00:05:47.637 04:26:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # cut -f2 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.637 04:26:10 version -- app/version.sh@18 -- # minor=1 00:05:47.637 04:26:10 version -- app/version.sh@19 -- # get_header_version patch 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # cut -f2 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.637 04:26:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.637 04:26:10 version -- app/version.sh@19 -- # patch=0 00:05:47.637 04:26:10 version -- app/version.sh@20 -- # get_header_version suffix 00:05:47.637 04:26:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # cut -f2 00:05:47.637 04:26:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.637 04:26:10 version -- app/version.sh@20 -- # suffix=-pre 00:05:47.637 04:26:10 version -- app/version.sh@22 -- # version=25.1 00:05:47.637 04:26:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:47.637 04:26:10 version -- app/version.sh@28 -- # version=25.1rc0 00:05:47.637 04:26:10 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:47.637 04:26:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:47.637 04:26:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:47.637 04:26:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:47.637 00:05:47.637 real 0m0.194s 00:05:47.637 user 0m0.116s 00:05:47.637 sys 0m0.103s 00:05:47.637 04:26:10 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.637 04:26:10 version -- common/autotest_common.sh@10 -- # set +x 00:05:47.637 ************************************ 00:05:47.637 END TEST version 00:05:47.637 ************************************ 00:05:47.637 04:26:10 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:47.637 04:26:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:47.637 04:26:10 -- spdk/autotest.sh@194 -- # uname -s 00:05:47.637 04:26:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:47.637 04:26:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:47.637 04:26:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:47.637 04:26:10 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:47.637 04:26:10 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:47.637 04:26:10 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:47.637 04:26:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.637 04:26:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.637 ************************************ 00:05:47.637 START TEST blockdev_nvme 00:05:47.637 ************************************ 00:05:47.637 04:26:10 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:47.637 * Looking for test storage... 00:05:47.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:47.637 04:26:10 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.637 04:26:10 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.637 04:26:10 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.896 04:26:10 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.896 04:26:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:47.897 04:26:10 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.897 04:26:10 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.897 04:26:10 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.897 04:26:10 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.897 --rc genhtml_branch_coverage=1 00:05:47.897 --rc genhtml_function_coverage=1 00:05:47.897 --rc genhtml_legend=1 00:05:47.897 --rc geninfo_all_blocks=1 00:05:47.897 --rc geninfo_unexecuted_blocks=1 00:05:47.897 00:05:47.897 ' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.897 --rc genhtml_branch_coverage=1 00:05:47.897 --rc genhtml_function_coverage=1 00:05:47.897 --rc genhtml_legend=1 00:05:47.897 --rc geninfo_all_blocks=1 00:05:47.897 --rc geninfo_unexecuted_blocks=1 00:05:47.897 00:05:47.897 ' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.897 --rc genhtml_branch_coverage=1 00:05:47.897 --rc genhtml_function_coverage=1 00:05:47.897 --rc genhtml_legend=1 00:05:47.897 --rc geninfo_all_blocks=1 00:05:47.897 --rc geninfo_unexecuted_blocks=1 00:05:47.897 00:05:47.897 ' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.897 --rc genhtml_branch_coverage=1 00:05:47.897 --rc genhtml_function_coverage=1 00:05:47.897 --rc genhtml_legend=1 00:05:47.897 --rc geninfo_all_blocks=1 00:05:47.897 --rc geninfo_unexecuted_blocks=1 00:05:47.897 00:05:47.897 ' 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:47.897 04:26:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:47.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59824 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59824 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59824 ']' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.897 04:26:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:47.897 04:26:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:47.897 [2024-11-03 04:26:10.839926] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:47.897 [2024-11-03 04:26:10.840154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59824 ] 00:05:48.155 [2024-11-03 04:26:10.983665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.155 [2024-11-03 04:26:11.062295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.723 04:26:11 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.723 04:26:11 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:48.723 04:26:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:48.723 04:26:11 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.723 04:26:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.984 04:26:12 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.984 04:26:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:48.984 04:26:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.984 04:26:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.984 04:26:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.984 04:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.244 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.244 04:26:12 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:49.244 04:26:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:49.244 04:26:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:49.244 04:26:12 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.244 04:26:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.244 04:26:12 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.244 04:26:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "63f92154-eb5b-4571-a3aa-c52b80dce938"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "63f92154-eb5b-4571-a3aa-c52b80dce938",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "23ca9e0f-0a5c-463d-9a04-c6a5bce7d9e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "23ca9e0f-0a5c-463d-9a04-c6a5bce7d9e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2b1110c4-49f1-41ca-a480-c35ef0fb18bc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b1110c4-49f1-41ca-a480-c35ef0fb18bc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "759335f8-bf39-4cd9-a13c-c11eef3b3767"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "759335f8-bf39-4cd9-a13c-c11eef3b3767",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bac5049f-e372-4636-acca-3cd34535de76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bac5049f-e372-4636-acca-3cd34535de76",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "46bdf5f6-3053-456e-815c-67868f85c24f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "46bdf5f6-3053-456e-815c-67868f85c24f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:49.245 04:26:12 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59824 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59824 ']' 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59824 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:05:49.245 04:26:12 blockdev_nvme -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59824 00:05:49.245 killing process with pid 59824 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59824' 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59824 00:05:49.245 04:26:12 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59824 00:05:50.624 04:26:13 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:50.624 04:26:13 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:50.624 04:26:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:05:50.624 04:26:13 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.624 04:26:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.624 ************************************ 00:05:50.624 START TEST bdev_hello_world 00:05:50.624 ************************************ 00:05:50.624 04:26:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:50.624 [2024-11-03 04:26:13.520696] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:50.624 [2024-11-03 04:26:13.520784] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:05:50.624 [2024-11-03 04:26:13.670468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.882 [2024-11-03 04:26:13.751353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.448 [2024-11-03 04:26:14.241839] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:51.448 [2024-11-03 04:26:14.241885] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:51.448 [2024-11-03 04:26:14.241905] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:51.448 [2024-11-03 04:26:14.244338] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:51.448 [2024-11-03 04:26:14.244865] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:51.448 [2024-11-03 04:26:14.244900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:51.448 [2024-11-03 04:26:14.245127] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
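For reference, the hello_bdev run traced above reduces to a single command; the lines below are an illustrative sketch reconstructed from the trace (binary path, JSON config and bdev name taken verbatim from the log), not additional captured output:

    # Run the SPDK hello_bdev example against the Nvme0n1 bdev defined in bdev.json.
    # Sketch only; assumes the same repo layout as this job.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1
    # On success it opens the bdev, writes "Hello World!", reads it back and stops
    # the app, producing the NOTICE lines shown above.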
00:05:51.448 00:05:51.448 [2024-11-03 04:26:14.245149] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:52.014 ************************************ 00:05:52.014 END TEST bdev_hello_world 00:05:52.014 ************************************ 00:05:52.014 00:05:52.014 real 0m1.475s 00:05:52.014 user 0m1.220s 00:05:52.014 sys 0m0.149s 00:05:52.014 04:26:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.014 04:26:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:52.014 04:26:14 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:52.014 04:26:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:52.014 04:26:14 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.014 04:26:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.014 ************************************ 00:05:52.014 START TEST bdev_bounds 00:05:52.014 ************************************ 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59939 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59939' 00:05:52.014 Process bdevio pid: 59939 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59939 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59939 ']' 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.014 04:26:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:52.014 [2024-11-03 04:26:15.052782] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
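The bdev_bounds test starting here is driven by two pieces visible in the traces: a bdevio app launched with the same JSON config, and a perform_tests request issued by tests.py once the app is listening on /var/tmp/spdk.sock. A hedged sketch of that pair, reconstructed from the log (flags and the trailing '' argument carried over verbatim from the trace; in this flow -w appears to make bdevio wait for the perform_tests request rather than running immediately):

    # Start the bdevio app with the options exactly as traced; it registers the
    # bdevs from bdev.json and waits to be told to run its suites.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # Once /var/tmp/spdk.sock is up, kick off the per-bdev test suites.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests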
00:05:52.014 [2024-11-03 04:26:15.053050] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:05:52.272 [2024-11-03 04:26:15.213551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.272 [2024-11-03 04:26:15.317463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.272 [2024-11-03 04:26:15.317775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.272 [2024-11-03 04:26:15.317792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.837 04:26:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.837 04:26:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:05:52.837 04:26:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:53.096 I/O targets: 00:05:53.096 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:53.096 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:53.096 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.096 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.096 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.096 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:53.096 00:05:53.096 00:05:53.096 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.096 http://cunit.sourceforge.net/ 00:05:53.096 00:05:53.096 00:05:53.096 Suite: bdevio tests on: Nvme3n1 00:05:53.096 Test: blockdev write read block ...passed 00:05:53.096 Test: blockdev write zeroes read block ...passed 00:05:53.096 Test: blockdev write zeroes read no split ...passed 00:05:53.096 Test: blockdev write zeroes read split ...passed 00:05:53.096 Test: blockdev write zeroes read split partial ...passed 00:05:53.096 Test: blockdev reset ...[2024-11-03 04:26:16.036769] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:53.096 [2024-11-03 04:26:16.039689] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:53.096 passed 00:05:53.096 Test: blockdev write read 8 blocks ...passed 00:05:53.096 Test: blockdev write read size > 128k ...passed 00:05:53.096 Test: blockdev write read invalid size ...passed 00:05:53.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.096 Test: blockdev write read max offset ...passed 00:05:53.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.096 Test: blockdev writev readv 8 blocks ...passed 00:05:53.096 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.096 Test: blockdev writev readv block ...passed 00:05:53.096 Test: blockdev writev readv size > 128k ...passed 00:05:53.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.096 Test: blockdev comparev and writev ...[2024-11-03 04:26:16.046964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb60a000 len:0x1000 00:05:53.096 [2024-11-03 04:26:16.047102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.096 passed 00:05:53.096 Test: blockdev nvme passthru rw ...passed 00:05:53.096 Test: blockdev nvme passthru vendor specific ...[2024-11-03 04:26:16.047889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.096 [2024-11-03 04:26:16.048012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:05:53.096 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:05:53.096 passed 00:05:53.096 Test: blockdev copy ...passed 00:05:53.096 Suite: bdevio tests on: Nvme2n3 00:05:53.096 Test: blockdev write read block ...passed 00:05:53.096 Test: blockdev write zeroes read block ...passed 00:05:53.096 Test: blockdev write zeroes read no split ...passed 00:05:53.096 Test: blockdev write zeroes read split ...passed 00:05:53.096 Test: blockdev write zeroes read split partial ...passed 00:05:53.096 Test: blockdev reset ...[2024-11-03 04:26:16.102349] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:53.096 passed 00:05:53.096 Test: blockdev write read 8 blocks ...[2024-11-03 04:26:16.105653] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:53.096 passed 00:05:53.096 Test: blockdev write read size > 128k ...passed 00:05:53.096 Test: blockdev write read invalid size ...passed 00:05:53.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.096 Test: blockdev write read max offset ...passed 00:05:53.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.096 Test: blockdev writev readv 8 blocks ...passed 00:05:53.096 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.096 Test: blockdev writev readv block ...passed 00:05:53.096 Test: blockdev writev readv size > 128k ...passed 00:05:53.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.096 Test: blockdev comparev and writev ...[2024-11-03 04:26:16.112182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e806000 len:0x1000 00:05:53.096 [2024-11-03 04:26:16.112311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.096 passed 00:05:53.096 Test: blockdev nvme passthru rw ...passed 00:05:53.096 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.096 Test: blockdev nvme admin passthru ...[2024-11-03 04:26:16.113201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.096 [2024-11-03 04:26:16.113258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.096 passed 00:05:53.096 Test: blockdev copy ...passed 00:05:53.096 Suite: bdevio tests on: Nvme2n2 00:05:53.096 Test: blockdev write read block ...passed 00:05:53.096 Test: blockdev write zeroes read block ...passed 00:05:53.096 Test: blockdev write zeroes read no split ...passed 00:05:53.096 Test: blockdev write zeroes read split ...passed 00:05:53.355 Test: blockdev write zeroes read split partial ...passed 00:05:53.355 Test: blockdev reset ...[2024-11-03 04:26:16.182044] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:53.355 [2024-11-03 04:26:16.184926] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:05:53.355 Test: blockdev write read 8 blocks ...successful. 
00:05:53.355 passed 00:05:53.355 Test: blockdev write read size > 128k ...passed 00:05:53.355 Test: blockdev write read invalid size ...passed 00:05:53.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.355 Test: blockdev write read max offset ...passed 00:05:53.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.355 Test: blockdev writev readv 8 blocks ...passed 00:05:53.355 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.355 Test: blockdev writev readv block ...passed 00:05:53.355 Test: blockdev writev readv size > 128k ...passed 00:05:53.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.355 Test: blockdev comparev and writev ...[2024-11-03 04:26:16.192510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e3c000 len:0x1000 00:05:53.355 [2024-11-03 04:26:16.192552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev nvme passthru rw ...passed 00:05:53.355 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.355 Test: blockdev nvme admin passthru ...[2024-11-03 04:26:16.193322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.355 [2024-11-03 04:26:16.193364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev copy ...passed 00:05:53.355 Suite: bdevio tests on: Nvme2n1 00:05:53.355 Test: blockdev write read block ...passed 00:05:53.355 Test: blockdev write zeroes read block ...passed 00:05:53.355 Test: blockdev write zeroes read no split ...passed 00:05:53.355 Test: blockdev write zeroes read split ...passed 00:05:53.355 Test: blockdev write zeroes read split partial ...passed 00:05:53.355 Test: blockdev reset ...[2024-11-03 04:26:16.261205] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:53.355 [2024-11-03 04:26:16.264053] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:05:53.355 Test: blockdev write read 8 blocks ...successful. 
00:05:53.355 passed 00:05:53.355 Test: blockdev write read size > 128k ...passed 00:05:53.355 Test: blockdev write read invalid size ...passed 00:05:53.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.355 Test: blockdev write read max offset ...passed 00:05:53.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.355 Test: blockdev writev readv 8 blocks ...passed 00:05:53.355 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.355 Test: blockdev writev readv block ...passed 00:05:53.355 Test: blockdev writev readv size > 128k ...passed 00:05:53.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.355 Test: blockdev comparev and writev ...[2024-11-03 04:26:16.272571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e38000 len:0x1000 00:05:53.355 [2024-11-03 04:26:16.272620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev nvme passthru rw ...passed 00:05:53.355 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.355 Test: blockdev nvme admin passthru ...[2024-11-03 04:26:16.273343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.355 [2024-11-03 04:26:16.273384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev copy ...passed 00:05:53.355 Suite: bdevio tests on: Nvme1n1 00:05:53.355 Test: blockdev write read block ...passed 00:05:53.355 Test: blockdev write zeroes read block ...passed 00:05:53.355 Test: blockdev write zeroes read no split ...passed 00:05:53.355 Test: blockdev write zeroes read split ...passed 00:05:53.355 Test: blockdev write zeroes read split partial ...passed 00:05:53.355 Test: blockdev reset ...[2024-11-03 04:26:16.324093] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:53.355 [2024-11-03 04:26:16.326515] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:53.355 passed 00:05:53.355 Test: blockdev write read 8 blocks ...passed 00:05:53.355 Test: blockdev write read size > 128k ...passed 00:05:53.355 Test: blockdev write read invalid size ...passed 00:05:53.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.355 Test: blockdev write read max offset ...passed 00:05:53.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.355 Test: blockdev writev readv 8 blocks ...passed 00:05:53.355 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.355 Test: blockdev writev readv block ...passed 00:05:53.355 Test: blockdev writev readv size > 128k ...passed 00:05:53.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.355 Test: blockdev comparev and writev ...[2024-11-03 04:26:16.334705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e34000 len:0x1000 00:05:53.355 [2024-11-03 04:26:16.334831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev nvme passthru rw ...passed 00:05:53.355 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.355 Test: blockdev nvme admin passthru ...[2024-11-03 04:26:16.335991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.355 [2024-11-03 04:26:16.336022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev copy ...passed 00:05:53.355 Suite: bdevio tests on: Nvme0n1 00:05:53.355 Test: blockdev write read block ...passed 00:05:53.355 Test: blockdev write zeroes read block ...passed 00:05:53.355 Test: blockdev write zeroes read no split ...passed 00:05:53.355 Test: blockdev write zeroes read split ...passed 00:05:53.355 Test: blockdev write zeroes read split partial ...passed 00:05:53.355 Test: blockdev reset ...[2024-11-03 04:26:16.390400] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:53.355 [2024-11-03 04:26:16.392802] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller passedsuccessful. 
00:05:53.355 00:05:53.355 Test: blockdev write read 8 blocks ...passed 00:05:53.355 Test: blockdev write read size > 128k ...passed 00:05:53.355 Test: blockdev write read invalid size ...passed 00:05:53.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.355 Test: blockdev write read max offset ...passed 00:05:53.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.355 Test: blockdev writev readv 8 blocks ...passed 00:05:53.355 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.355 Test: blockdev writev readv block ...passed 00:05:53.355 Test: blockdev writev readv size > 128k ...passed 00:05:53.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.355 Test: blockdev comparev and writev ...passed 00:05:53.355 Test: blockdev nvme passthru rw ...[2024-11-03 04:26:16.399234] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:53.355 separate metadata which is not supported yet. 00:05:53.355 passed 00:05:53.355 Test: blockdev nvme passthru vendor specific ...[2024-11-03 04:26:16.399796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:53.355 [2024-11-03 04:26:16.400165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:05:53.355 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:05:53.355 passed 00:05:53.355 Test: blockdev copy ...passed 00:05:53.355 00:05:53.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.355 suites 6 6 n/a 0 0 00:05:53.355 tests 138 138 138 0 0 00:05:53.355 asserts 893 893 893 0 n/a 00:05:53.355 00:05:53.355 Elapsed time = 1.079 seconds 00:05:53.355 0 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59939 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59939 ']' 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59939 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.355 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59939 00:05:53.614 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.614 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.614 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59939' 00:05:53.614 killing process with pid 59939 00:05:53.614 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59939 00:05:53.614 04:26:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59939 00:05:54.180 04:26:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:54.180 00:05:54.180 real 0m2.097s 00:05:54.180 user 0m5.356s 00:05:54.180 sys 0m0.272s 00:05:54.180 04:26:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.180 04:26:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:54.180 ************************************ 00:05:54.180 END TEST 
bdev_bounds 00:05:54.180 ************************************ 00:05:54.180 04:26:17 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:54.180 04:26:17 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:54.180 04:26:17 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.180 04:26:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.180 ************************************ 00:05:54.180 START TEST bdev_nbd 00:05:54.180 ************************************ 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59993 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59993 /var/tmp/spdk-nbd.sock 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59993 ']' 00:05:54.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
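The bdev_nbd test body that follows exports each bdev as a kernel NBD device over the dedicated /var/tmp/spdk-nbd.sock RPC socket and verifies it with a single 4 KiB direct-I/O read. A condensed sketch of the per-device cycle, reconstructed from the traces below (paths taken from the log; the real test wraps the /proc/partitions check in a retry loop via its waitfornbd helper):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Export the bdev; the RPC prints the /dev/nbdX node it attached to.
    nbd_dev=$($RPC nbd_start_disk Nvme0n1)
    # Confirm the kernel registered the device, then read one block with O_DIRECT.
    grep -q -w "$(basename "$nbd_dev")" /proc/partitions
    dd if="$nbd_dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # expect 4096
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # List the active exports and tear the device back down.
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'
    $RPC nbd_stop_disk "$nbd_dev"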
00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.180 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.181 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.181 04:26:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:54.181 04:26:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:54.181 [2024-11-03 04:26:17.200714] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:54.181 [2024-11-03 04:26:17.200972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.439 [2024-11-03 04:26:17.356067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.439 [2024-11-03 04:26:17.435878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.004 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.263 1+0 records in 00:05:55.263 1+0 records out 00:05:55.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458966 s, 8.9 MB/s 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.263 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.521 1+0 records in 00:05:55.521 1+0 records out 00:05:55.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378191 s, 10.8 MB/s 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
size=4096 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.521 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.780 1+0 records in 00:05:55.780 1+0 records out 00:05:55.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494477 s, 8.3 MB/s 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.780 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:56.038 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:56.038 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:56.039 04:26:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:56.039 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:05:56.039 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:56.039 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:56.039 04:26:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:56.039 04:26:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.039 1+0 records in 00:05:56.039 1+0 records out 00:05:56.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383307 s, 10.7 MB/s 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.039 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.299 1+0 records in 00:05:56.299 1+0 records out 00:05:56.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348765 s, 11.7 MB/s 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:56.299 
04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.299 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.558 1+0 records in 00:05:56.558 1+0 records out 00:05:56.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322689 s, 12.7 MB/s 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.558 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd0", 00:05:56.816 "bdev_name": "Nvme0n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd1", 00:05:56.816 "bdev_name": "Nvme1n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd2", 00:05:56.816 "bdev_name": "Nvme2n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd3", 00:05:56.816 "bdev_name": "Nvme2n2" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd4", 00:05:56.816 "bdev_name": "Nvme2n3" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd5", 00:05:56.816 "bdev_name": "Nvme3n1" 00:05:56.816 } 00:05:56.816 ]' 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo 
"${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd0", 00:05:56.816 "bdev_name": "Nvme0n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd1", 00:05:56.816 "bdev_name": "Nvme1n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd2", 00:05:56.816 "bdev_name": "Nvme2n1" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd3", 00:05:56.816 "bdev_name": "Nvme2n2" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd4", 00:05:56.816 "bdev_name": "Nvme2n3" 00:05:56.816 }, 00:05:56.816 { 00:05:56.816 "nbd_device": "/dev/nbd5", 00:05:56.816 "bdev_name": "Nvme3n1" 00:05:56.816 } 00:05:56.816 ]' 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.816 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.073 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.073 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.073 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.073 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.073 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.074 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.074 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.074 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.074 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.074 04:26:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.074 04:26:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.074 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.332 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.591 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:57.849 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.850 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 
00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.107 04:26:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.107 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.108 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.108 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:58.366 /dev/nbd0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:58.366 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.366 1+0 records in 00:05:58.366 1+0 records out 00:05:58.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383849 s, 10.7 MB/s 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:58.625 /dev/nbd1 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 
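The attach side above is the mirror image: after each nbd_start_disk RPC, waitfornbd polls /proc/partitions until the new device shows up, then a single 4 KiB O_DIRECT read is pulled off it and the resulting file size is checked, confirming the export actually serves data before the next bdev is wired up. A condensed sketch of that attach-and-probe sequence (the socket and dd flags are as in the trace; rpc.py and the scratch file path are shortened, and the retry interval is assumed):

sock=/var/tmp/spdk-nbd.sock
rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0             # export the bdev as /dev/nbd0
for ((i = 1; i <= 20; i++)); do                                # waitfornbd: wait for the kernel node
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                                                  # interval assumed
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one-block O_DIRECT readback
size=$(stat -c %s /tmp/nbdtest)                                # trace requires a non-zero size
rm -f /tmp/nbdtest
[ "$size" != 0 ]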
00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.625 1+0 records in 00:05:58.625 1+0 records out 00:05:58.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248015 s, 16.5 MB/s 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:58.625 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.626 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:58.626 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:58.626 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.626 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.626 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:58.883 /dev/nbd10 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:58.883 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.883 1+0 records in 00:05:58.883 1+0 records out 00:05:58.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038505 s, 10.6 MB/s 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.884 04:26:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 
00:05:59.142 /dev/nbd11 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.142 1+0 records in 00:05:59.142 1+0 records out 00:05:59.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350661 s, 11.7 MB/s 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.142 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:59.401 /dev/nbd12 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.401 1+0 records in 00:05:59.401 1+0 records out 00:05:59.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515525 s, 7.9 MB/s 00:05:59.401 04:26:22 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.401 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:59.659 /dev/nbd13 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.659 1+0 records in 00:05:59.659 1+0 records out 00:05:59.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362229 s, 11.3 MB/s 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.659 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd0", 00:05:59.918 "bdev_name": "Nvme0n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 
"nbd_device": "/dev/nbd1", 00:05:59.918 "bdev_name": "Nvme1n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd10", 00:05:59.918 "bdev_name": "Nvme2n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd11", 00:05:59.918 "bdev_name": "Nvme2n2" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd12", 00:05:59.918 "bdev_name": "Nvme2n3" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd13", 00:05:59.918 "bdev_name": "Nvme3n1" 00:05:59.918 } 00:05:59.918 ]' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd0", 00:05:59.918 "bdev_name": "Nvme0n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd1", 00:05:59.918 "bdev_name": "Nvme1n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd10", 00:05:59.918 "bdev_name": "Nvme2n1" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd11", 00:05:59.918 "bdev_name": "Nvme2n2" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd12", 00:05:59.918 "bdev_name": "Nvme2n3" 00:05:59.918 }, 00:05:59.918 { 00:05:59.918 "nbd_device": "/dev/nbd13", 00:05:59.918 "bdev_name": "Nvme3n1" 00:05:59.918 } 00:05:59.918 ]' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.918 /dev/nbd1 00:05:59.918 /dev/nbd10 00:05:59.918 /dev/nbd11 00:05:59.918 /dev/nbd12 00:05:59.918 /dev/nbd13' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.918 /dev/nbd1 00:05:59.918 /dev/nbd10 00:05:59.918 /dev/nbd11 00:05:59.918 /dev/nbd12 00:05:59.918 /dev/nbd13' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:59.918 256+0 records in 00:05:59.918 256+0 records out 00:05:59.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697367 s, 150 MB/s 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
oflag=direct 00:05:59.918 256+0 records in 00:05:59.918 256+0 records out 00:05:59.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.052435 s, 20.0 MB/s 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.918 256+0 records in 00:05:59.918 256+0 records out 00:05:59.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0506997 s, 20.7 MB/s 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.918 04:26:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0514437 s, 20.4 MB/s 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0507688 s, 20.7 MB/s 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0500364 s, 21.0 MB/s 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0507565 s, 20.7 MB/s 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.176 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.177 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.435 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 
20 )) 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.693 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.952 04:26:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.952 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.210 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd13 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.470 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.728 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:01.729 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:01.989 malloc_lvol_verify 00:06:01.989 04:26:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:02.248 b43ea9d0-10ca-4f26-96ed-57f874b666e9 00:06:02.248 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:02.248 83261436-60a9-4083-9d47-9fa3d5cee841 00:06:02.248 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:02.506 /dev/nbd0 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 
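The nbd_dd_data_verify pass traced above then exercises real data: 1 MiB of random bytes is written to a scratch file, pushed onto every exported device with O_DIRECT writes, and each device is compared byte-for-byte against the source with cmp. Stripped of the per-device loop bookkeeping it amounts to the following (device list, sizes and cmp flags as in the trace; scratch path shortened):

tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write pass
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$tmp" "$nbd"                               # verify pass: any mismatch fails the test
done
rm "$tmp"

Once the devices are detached again, the trace moves on to nbd_with_lvol_verify just above: a malloc bdev, an lvstore and an lvol are created over the same RPC socket, the lvol is exported as /dev/nbd0, its advertised size is checked via /sys/block/nbd0/size, and mkfs.ext4 is then run on it (output below).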
00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:02.506 mke2fs 1.47.0 (5-Feb-2023) 00:06:02.506 Discarding device blocks: 0/4096 done 00:06:02.506 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:02.506 00:06:02.506 Allocating group tables: 0/1 done 00:06:02.506 Writing inode tables: 0/1 done 00:06:02.506 Creating journal (1024 blocks): done 00:06:02.506 Writing superblocks and filesystem accounting information: 0/1 done 00:06:02.506 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.506 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59993 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59993 ']' 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59993 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59993 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59993' 00:06:02.765 killing process with pid 59993 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59993 00:06:02.765 04:26:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59993 00:06:03.347 04:26:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - 
SIGINT SIGTERM EXIT 00:06:03.347 00:06:03.347 real 0m9.234s 00:06:03.347 user 0m13.486s 00:06:03.347 sys 0m2.936s 00:06:03.347 04:26:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.347 04:26:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:03.347 ************************************ 00:06:03.347 END TEST bdev_nbd 00:06:03.347 ************************************ 00:06:03.347 04:26:26 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:03.347 04:26:26 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:03.347 skipping fio tests on NVMe due to multi-ns failures. 00:06:03.347 04:26:26 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:03.347 04:26:26 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:03.347 04:26:26 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:03.347 04:26:26 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:03.347 04:26:26 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.348 04:26:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.348 ************************************ 00:06:03.348 START TEST bdev_verify 00:06:03.348 ************************************ 00:06:03.348 04:26:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:03.606 [2024-11-03 04:26:26.461692] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:03.606 [2024-11-03 04:26:26.461784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:06:03.606 [2024-11-03 04:26:26.611587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.864 [2024-11-03 04:26:26.693141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.864 [2024-11-03 04:26:26.693229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.429 Running I/O for 5 seconds... 
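The bdev_verify stage above is a plain bdevperf run driven entirely from the command line, with the bdev layout loaded from a pre-generated bdev.json. The invocation is the one shown in the trace; the per-switch notes are a reading of the flags, not something the log itself states:

# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write a pattern, then read it back and compare
# -t 5       run time in seconds
# -m 0x3     core mask: cores 0 and 1 (hence the two reactors above)
# -C         carried over from the trace unannotated
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''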
00:06:06.291 23104.00 IOPS, 90.25 MiB/s [2024-11-03T04:26:30.757Z] 23488.00 IOPS, 91.75 MiB/s [2024-11-03T04:26:31.698Z] 22933.33 IOPS, 89.58 MiB/s [2024-11-03T04:26:32.639Z] 22144.00 IOPS, 86.50 MiB/s [2024-11-03T04:26:32.639Z] 21708.80 IOPS, 84.80 MiB/s 00:06:09.555 Latency(us) 00:06:09.555 [2024-11-03T04:26:32.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:09.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.555 Verification LBA range: start 0x0 length 0xbd0bd 00:06:09.555 Nvme0n1 : 5.04 1803.42 7.04 0.00 0.00 70743.32 10586.58 66544.25 00:06:09.555 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.555 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:09.555 Nvme0n1 : 5.05 1772.54 6.92 0.00 0.00 71983.77 11947.72 70173.93 00:06:09.555 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.555 Verification LBA range: start 0x0 length 0xa0000 00:06:09.555 Nvme1n1 : 5.04 1802.86 7.04 0.00 0.00 70681.31 12905.55 62914.56 00:06:09.555 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.555 Verification LBA range: start 0xa0000 length 0xa0000 00:06:09.555 Nvme1n1 : 5.06 1771.63 6.92 0.00 0.00 71843.94 14216.27 66947.54 00:06:09.555 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.555 Verification LBA range: start 0x0 length 0x80000 00:06:09.556 Nvme2n1 : 5.04 1802.32 7.04 0.00 0.00 70621.88 14115.45 63721.16 00:06:09.556 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x80000 length 0x80000 00:06:09.556 Nvme2n1 : 5.06 1770.94 6.92 0.00 0.00 71725.60 14518.74 67754.14 00:06:09.556 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x0 length 0x80000 00:06:09.556 Nvme2n2 : 5.05 1811.48 7.08 0.00 0.00 70226.73 5494.94 62107.96 00:06:09.556 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x80000 length 0x80000 00:06:09.556 Nvme2n2 : 5.06 1770.48 6.92 0.00 0.00 71597.23 14821.22 67350.84 00:06:09.556 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x0 length 0x80000 00:06:09.556 Nvme2n3 : 5.05 1810.74 7.07 0.00 0.00 70142.50 6503.19 65737.65 00:06:09.556 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x80000 length 0x80000 00:06:09.556 Nvme2n3 : 5.07 1779.56 6.95 0.00 0.00 71170.95 5318.50 68157.44 00:06:09.556 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x0 length 0x20000 00:06:09.556 Nvme3n1 : 5.06 1810.01 7.07 0.00 0.00 70054.95 7612.26 66947.54 00:06:09.556 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:09.556 Verification LBA range: start 0x20000 length 0x20000 00:06:09.556 Nvme3n1 : 5.07 1779.09 6.95 0.00 0.00 71108.53 5646.18 69770.63 00:06:09.556 [2024-11-03T04:26:32.640Z] =================================================================================================================== 00:06:09.556 [2024-11-03T04:26:32.640Z] Total : 21485.06 83.93 0.00 0.00 70986.30 5318.50 70173.93 00:06:10.495 00:06:10.495 real 0m7.045s 00:06:10.495 user 0m13.209s 00:06:10.495 sys 0m0.212s 00:06:10.495 ************************************ 00:06:10.495 END 
TEST bdev_verify 00:06:10.495 ************************************ 00:06:10.495 04:26:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.495 04:26:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:10.495 04:26:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:10.495 04:26:33 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:10.495 04:26:33 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.495 04:26:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.495 ************************************ 00:06:10.495 START TEST bdev_verify_big_io 00:06:10.495 ************************************ 00:06:10.495 04:26:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:10.495 [2024-11-03 04:26:33.572329] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:10.495 [2024-11-03 04:26:33.572445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:06:10.754 [2024-11-03 04:26:33.732482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.754 [2024-11-03 04:26:33.833384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.754 [2024-11-03 04:26:33.833394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.691 Running I/O for 5 seconds... 
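The MiB/s column in these bdevperf summaries is just IOPS multiplied by the -o size: for the 4 KiB verify totals above, 21708.80 IOPS * 4096 B comes to 84.80 MiB/s, matching the reported figure, and the 64 KiB big-I/O totals below obey the same relation (1434.18 * 65536 is about 89.64 MiB/s). A one-liner to recompute it:

awk -v iops=21708.80 -v iosz=4096 'BEGIN { printf "%.2f MiB/s\n", iops * iosz / (1024 * 1024) }'
# -> 84.80 MiB/s, matching the bdev_verify Total row above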
00:06:14.982 0.00 IOPS, 0.00 MiB/s [2024-11-03T04:26:40.598Z] 1073.50 IOPS, 67.09 MiB/s [2024-11-03T04:26:40.856Z] 1792.00 IOPS, 112.00 MiB/s 00:06:17.772 Latency(us) 00:06:17.772 [2024-11-03T04:26:40.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.772 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0xbd0b 00:06:17.772 Nvme0n1 : 5.90 107.80 6.74 0.00 0.00 1151588.16 20971.52 1226027.32 00:06:17.772 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:17.772 Nvme0n1 : 5.70 112.36 7.02 0.00 0.00 1094857.26 21677.29 1232480.10 00:06:17.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0xa000 00:06:17.772 Nvme1n1 : 5.90 104.21 6.51 0.00 0.00 1129090.77 100421.32 1019538.51 00:06:17.772 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0xa000 length 0xa000 00:06:17.772 Nvme1n1 : 5.70 112.33 7.02 0.00 0.00 1054912.83 119376.34 1025991.29 00:06:17.772 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0x8000 00:06:17.772 Nvme2n1 : 5.90 108.41 6.78 0.00 0.00 1062011.83 101631.21 1051802.39 00:06:17.772 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x8000 length 0x8000 00:06:17.772 Nvme2n1 : 5.87 119.93 7.50 0.00 0.00 959426.99 66947.54 967916.31 00:06:17.772 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0x8000 00:06:17.772 Nvme2n2 : 5.99 110.60 6.91 0.00 0.00 1000566.15 88725.66 1077613.49 00:06:17.772 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x8000 length 0x8000 00:06:17.772 Nvme2n2 : 5.98 124.34 7.77 0.00 0.00 889542.61 58478.28 1000180.18 00:06:17.772 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0x8000 00:06:17.772 Nvme2n3 : 6.08 121.83 7.61 0.00 0.00 885130.86 45976.02 1116330.14 00:06:17.772 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x8000 length 0x8000 00:06:17.772 Nvme2n3 : 6.00 131.17 8.20 0.00 0.00 818506.53 22080.59 1238932.87 00:06:17.772 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x0 length 0x2000 00:06:17.772 Nvme3n1 : 6.09 130.48 8.15 0.00 0.00 798683.35 617.55 1135688.47 00:06:17.772 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:17.772 Verification LBA range: start 0x2000 length 0x2000 00:06:17.772 Nvme3n1 : 6.09 150.74 9.42 0.00 0.00 688972.64 680.57 2219754.73 00:06:17.772 [2024-11-03T04:26:40.856Z] =================================================================================================================== 00:06:17.772 [2024-11-03T04:26:40.856Z] Total : 1434.18 89.64 0.00 0.00 944579.45 617.55 2219754.73 00:06:19.144 00:06:19.144 real 0m8.380s 00:06:19.144 user 0m15.886s 00:06:19.144 sys 0m0.221s 00:06:19.144 04:26:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.144 04:26:41 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:19.144 ************************************ 00:06:19.144 END TEST bdev_verify_big_io 00:06:19.144 ************************************ 00:06:19.144 04:26:41 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.144 04:26:41 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:19.144 04:26:41 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.144 04:26:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.144 ************************************ 00:06:19.144 START TEST bdev_write_zeroes 00:06:19.144 ************************************ 00:06:19.144 04:26:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:19.144 [2024-11-03 04:26:42.002888] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:19.144 [2024-11-03 04:26:42.003005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:06:19.144 [2024-11-03 04:26:42.158487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.403 [2024-11-03 04:26:42.235880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.968 Running I/O for 1 seconds... 
00:06:20.983 76416.00 IOPS, 298.50 MiB/s 00:06:20.983 Latency(us) 00:06:20.983 [2024-11-03T04:26:44.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.983 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme0n1 : 1.02 12644.19 49.39 0.00 0.00 10101.83 8368.44 22887.19 00:06:20.983 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme1n1 : 1.02 12629.38 49.33 0.00 0.00 10104.22 8368.44 22584.71 00:06:20.983 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme2n1 : 1.02 12615.14 49.28 0.00 0.00 10074.40 8519.68 20971.52 00:06:20.983 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme2n2 : 1.03 12600.94 49.22 0.00 0.00 10051.15 8368.44 18854.20 00:06:20.983 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme2n3 : 1.03 12586.72 49.17 0.00 0.00 10031.03 8368.44 18350.08 00:06:20.983 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.983 Nvme3n1 : 1.03 12572.52 49.11 0.00 0.00 10009.77 8116.38 19963.27 00:06:20.983 [2024-11-03T04:26:44.067Z] =================================================================================================================== 00:06:20.983 [2024-11-03T04:26:44.067Z] Total : 75648.90 295.50 0.00 0.00 10062.06 8116.38 22887.19 00:06:21.551 00:06:21.551 real 0m2.576s 00:06:21.551 user 0m2.283s 00:06:21.551 sys 0m0.182s 00:06:21.551 04:26:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.551 04:26:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:21.551 ************************************ 00:06:21.551 END TEST bdev_write_zeroes 00:06:21.551 ************************************ 00:06:21.551 04:26:44 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.551 04:26:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:21.551 04:26:44 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.551 04:26:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.551 ************************************ 00:06:21.551 START TEST bdev_json_nonenclosed 00:06:21.551 ************************************ 00:06:21.551 04:26:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.551 [2024-11-03 04:26:44.616757] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
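The write_zeroes table above runs one job per namespace, all pinned to core mask 0x1, and its Total row is simply the per-device IOPS summed (the last digit differs by 0.01 because each row is already rounded). A quick check against the numbers above:

awk 'BEGIN { print 12644.19 + 12629.38 + 12615.14 + 12600.94 + 12586.72 + 12572.52 }'
# -> 75648.89, vs. the reported Total of 75648.90 (per-row rounding)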
00:06:21.551 [2024-11-03 04:26:44.616873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60624 ] 00:06:21.810 [2024-11-03 04:26:44.775605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.810 [2024-11-03 04:26:44.873037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.810 [2024-11-03 04:26:44.873115] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:21.810 [2024-11-03 04:26:44.873132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:21.810 [2024-11-03 04:26:44.873141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.068 00:06:22.068 real 0m0.495s 00:06:22.068 user 0m0.300s 00:06:22.068 sys 0m0.091s 00:06:22.068 04:26:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.068 ************************************ 00:06:22.068 END TEST bdev_json_nonenclosed 00:06:22.068 ************************************ 00:06:22.068 04:26:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:22.068 04:26:45 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.068 04:26:45 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:22.068 04:26:45 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.068 04:26:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.068 ************************************ 00:06:22.069 START TEST bdev_json_nonarray 00:06:22.069 ************************************ 00:06:22.069 04:26:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.327 [2024-11-03 04:26:45.164971] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:22.327 [2024-11-03 04:26:45.165091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60644 ] 00:06:22.327 [2024-11-03 04:26:45.326625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.584 [2024-11-03 04:26:45.424038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.584 [2024-11-03 04:26:45.424116] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
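The two bdev_json_* stages above are negative tests: bdevperf is pointed at deliberately malformed --json configs and the pass condition is that it fails with exactly the json_config errors shown. The actual nonenclosed.json and nonarray.json fixtures are not reproduced in this log; purely illustrative stand-ins that should trip the same two checks (top-level value not an object, and "subsystems" not an array) could look like:

# Illustrative only -- not the real test fixtures.
cat > nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF
# expected: Invalid JSON configuration: not enclosed in {}.

cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# expected: Invalid JSON configuration: 'subsystems' should be an array.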
00:06:22.584 [2024-11-03 04:26:45.424133] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:22.584 [2024-11-03 04:26:45.424142] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.584 00:06:22.584 real 0m0.499s 00:06:22.584 user 0m0.299s 00:06:22.584 sys 0m0.094s 00:06:22.584 ************************************ 00:06:22.584 END TEST bdev_json_nonarray 00:06:22.584 04:26:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.584 04:26:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:22.584 ************************************ 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:22.584 04:26:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:22.584 ************************************ 00:06:22.584 END TEST blockdev_nvme 00:06:22.584 ************************************ 00:06:22.584 00:06:22.584 real 0m35.021s 00:06:22.584 user 0m55.035s 00:06:22.584 sys 0m4.827s 00:06:22.584 04:26:45 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.584 04:26:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.842 04:26:45 -- spdk/autotest.sh@209 -- # uname -s 00:06:22.842 04:26:45 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:22.842 04:26:45 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:22.842 04:26:45 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:22.842 04:26:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.842 04:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.842 ************************************ 00:06:22.842 START TEST blockdev_nvme_gpt 00:06:22.842 ************************************ 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:22.842 * Looking for test storage... 
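At this point the nvme flavour of blockdev.sh has completed (about 35 s of wall time, with user time above real time because several reactor threads ran in parallel) and autotest.sh re-invokes the same script with the gpt argument. Outside the CI wrapper, a rough local equivalent of that run_test call would be (root assumed, since the script rebinds the NVMe devices via setup.sh; paths as used in this log):

cd /home/vagrant/spdk_repo/spdk
sudo ./test/bdev/blockdev.sh gpt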
00:06:22.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.842 04:26:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.842 --rc genhtml_branch_coverage=1 00:06:22.842 --rc genhtml_function_coverage=1 00:06:22.842 --rc genhtml_legend=1 00:06:22.842 --rc geninfo_all_blocks=1 00:06:22.842 --rc geninfo_unexecuted_blocks=1 00:06:22.842 00:06:22.842 ' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.842 --rc 
genhtml_branch_coverage=1 00:06:22.842 --rc genhtml_function_coverage=1 00:06:22.842 --rc genhtml_legend=1 00:06:22.842 --rc geninfo_all_blocks=1 00:06:22.842 --rc geninfo_unexecuted_blocks=1 00:06:22.842 00:06:22.842 ' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.842 --rc genhtml_branch_coverage=1 00:06:22.842 --rc genhtml_function_coverage=1 00:06:22.842 --rc genhtml_legend=1 00:06:22.842 --rc geninfo_all_blocks=1 00:06:22.842 --rc geninfo_unexecuted_blocks=1 00:06:22.842 00:06:22.842 ' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.842 --rc genhtml_branch_coverage=1 00:06:22.842 --rc genhtml_function_coverage=1 00:06:22.842 --rc genhtml_legend=1 00:06:22.842 --rc geninfo_all_blocks=1 00:06:22.842 --rc geninfo_unexecuted_blocks=1 00:06:22.842 00:06:22.842 ' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60728 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60728 
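The scripts/common.sh trace above is the helper that decides whether the lcov reported here (1.15) is older than 2, which selects the old-style --rc lcov_* coverage flags: it splits both version strings on '.', '-' and ':' and compares them field by field. A minimal re-sketch of that idea, not the actual helper:

version_lt() {                     # usage: version_lt 1.15 2
  local IFS='.-:'                  # split fields the same way the trace above does
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                         # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov is older than 2"   # prints the message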
00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60728 ']' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.842 04:26:45 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:22.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.842 04:26:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:22.842 [2024-11-03 04:26:45.903997] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:22.842 [2024-11-03 04:26:45.904257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:06:23.100 [2024-11-03 04:26:46.064015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.100 [2024-11-03 04:26:46.159172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.665 04:26:46 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.665 04:26:46 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:06:23.665 04:26:46 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:23.665 04:26:46 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:23.665 04:26:46 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.180 Waiting for block devices as requested 00:06:24.180 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.440 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.440 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.440 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:29.703 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:29.703 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.703 04:26:52 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:29.703 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:29.704 04:26:52 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:29.704 BYT; 00:06:29.704 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:29.704 BYT; 00:06:29.704 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.704 04:26:52 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:29.704 04:26:52 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:30.635 The operation has completed successfully. 00:06:30.635 04:26:53 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:31.568 The operation has completed successfully. 00:06:31.568 04:26:54 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.391 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.391 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.391 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.649 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:32.649 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.649 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.649 [] 00:06:32.649 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.649 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:32.649 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.649 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.908 04:26:55 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:32.908 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.908 04:26:55 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.167 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:33.167 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:33.168 04:26:55 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6a3ebbef-4084-4cb1-9f42-161134584f1b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6a3ebbef-4084-4cb1-9f42-161134584f1b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3fea7f3c-95dd-4f28-9f6e-e0004b462106"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3fea7f3c-95dd-4f28-9f6e-e0004b462106",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e1634a27-c897-477d-beb8-35e75c98d48b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1634a27-c897-477d-beb8-35e75c98d48b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6913af75-c3c2-490a-9fc5-463b28f48d15"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6913af75-c3c2-490a-9fc5-463b28f48d15",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4d87d79b-0109-45fc-9f4c-f35b2cd672f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4d87d79b-0109-45fc-9f4c-f35b2cd672f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:33.168 04:26:56 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:33.168 04:26:56 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:33.168 04:26:56 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:33.168 04:26:56 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60728 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60728 ']' 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60728 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60728 00:06:33.168 killing process with pid 60728 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60728' 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60728 00:06:33.168 04:26:56 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60728 00:06:34.540 04:26:57 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:34.540 04:26:57 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:34.540 04:26:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:34.540 04:26:57 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.540 04:26:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.540 ************************************ 00:06:34.540 START TEST bdev_hello_world 00:06:34.540 ************************************ 00:06:34.540 04:26:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:34.540 
[2024-11-03 04:26:57.504339] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:34.540 [2024-11-03 04:26:57.504587] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61346 ] 00:06:34.798 [2024-11-03 04:26:57.662039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.798 [2024-11-03 04:26:57.762903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.365 [2024-11-03 04:26:58.299810] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:35.365 [2024-11-03 04:26:58.299858] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:35.365 [2024-11-03 04:26:58.299876] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:35.365 [2024-11-03 04:26:58.302308] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:35.365 [2024-11-03 04:26:58.302792] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:35.365 [2024-11-03 04:26:58.302820] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:35.365 [2024-11-03 04:26:58.303030] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:35.365 00:06:35.365 [2024-11-03 04:26:58.303053] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:35.932 ************************************ 00:06:35.932 END TEST bdev_hello_world 00:06:35.932 ************************************ 00:06:35.932 00:06:35.932 real 0m1.530s 00:06:35.932 user 0m1.260s 00:06:35.932 sys 0m0.164s 00:06:35.932 04:26:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.932 04:26:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:35.932 04:26:59 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:35.932 04:26:59 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:35.932 04:26:59 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.932 04:26:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.932 ************************************ 00:06:35.932 START TEST bdev_bounds 00:06:35.932 ************************************ 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61388 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.932 Process bdevio pid: 61388 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61388' 00:06:35.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
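The hello_world step above opens Nvme0n1 through the generated bdev.json, writes a buffer, reads it back, and exits after logging "Read string from bdev : Hello World!". Re-running that step by hand uses the same binary and config file the test passed in (sudo assumed for NVMe device access; requires the bdev.json generated earlier in this run):

cd /home/vagrant/spdk_repo/spdk
sudo ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1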
00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61388 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61388 ']' 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.932 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:36.190 [2024-11-03 04:26:59.060551] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:36.190 [2024-11-03 04:26:59.060657] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61388 ] 00:06:36.190 [2024-11-03 04:26:59.208160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.448 [2024-11-03 04:26:59.289523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.448 [2024-11-03 04:26:59.289595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.448 [2024-11-03 04:26:59.289617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.015 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.015 04:26:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:37.015 04:26:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:37.015 I/O targets: 00:06:37.015 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:37.015 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:37.015 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:37.015 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:37.015 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:37.015 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:37.015 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:37.015 00:06:37.015 00:06:37.015 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.015 http://cunit.sourceforge.net/ 00:06:37.015 00:06:37.015 00:06:37.015 Suite: bdevio tests on: Nvme3n1 00:06:37.015 Test: blockdev write read block ...passed 00:06:37.015 Test: blockdev write zeroes read block ...passed 00:06:37.015 Test: blockdev write zeroes read no split ...passed 00:06:37.015 Test: blockdev write zeroes read split ...passed 00:06:37.015 Test: blockdev write zeroes read split partial ...passed 00:06:37.015 Test: blockdev reset ...[2024-11-03 04:27:00.048093] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:37.015 passed 00:06:37.015 Test: blockdev write read 8 blocks ...[2024-11-03 04:27:00.050773] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:37.015 passed 00:06:37.015 Test: blockdev write read size > 128k ...passed 00:06:37.015 Test: blockdev write read invalid size ...passed 00:06:37.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.015 Test: blockdev write read max offset ...passed 00:06:37.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.015 Test: blockdev writev readv 8 blocks ...passed 00:06:37.015 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.015 Test: blockdev writev readv block ...passed 00:06:37.015 Test: blockdev writev readv size > 128k ...passed 00:06:37.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.015 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.058184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:37.015 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b9604000 len:0x1000 00:06:37.015 [2024-11-03 04:27:00.058318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.015 passed 00:06:37.015 Test: blockdev nvme passthru vendor specific ...passed 00:06:37.015 Test: blockdev nvme admin passthru ...[2024-11-03 04:27:00.059027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:37.015 [2024-11-03 04:27:00.059058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:37.015 passed 00:06:37.015 Test: blockdev copy ...passed 00:06:37.015 Suite: bdevio tests on: Nvme2n3 00:06:37.015 Test: blockdev write read block ...passed 00:06:37.015 Test: blockdev write zeroes read block ...passed 00:06:37.015 Test: blockdev write zeroes read no split ...passed 00:06:37.015 Test: blockdev write zeroes read split ...passed 00:06:37.275 Test: blockdev write zeroes read split partial ...passed 00:06:37.275 Test: blockdev reset ...[2024-11-03 04:27:00.116749] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:37.275 [2024-11-03 04:27:00.119794] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:37.275 passed 00:06:37.275 Test: blockdev write read 8 blocks ...passed 00:06:37.275 Test: blockdev write read size > 128k ...passed 00:06:37.275 Test: blockdev write read invalid size ...passed 00:06:37.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.275 Test: blockdev write read max offset ...passed 00:06:37.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.275 Test: blockdev writev readv 8 blocks ...passed 00:06:37.275 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.275 Test: blockdev writev readv block ...passed 00:06:37.275 Test: blockdev writev readv size > 128k ...passed 00:06:37.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.275 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.127186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9602000 len:0x1000 00:06:37.275 [2024-11-03 04:27:00.127226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev nvme passthru rw ...passed 00:06:37.275 Test: blockdev nvme passthru vendor specific ...passed 00:06:37.275 Test: blockdev nvme admin passthru ...[2024-11-03 04:27:00.127842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:37.275 [2024-11-03 04:27:00.127871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev copy ...passed 00:06:37.275 Suite: bdevio tests on: Nvme2n2 00:06:37.275 Test: blockdev write read block ...passed 00:06:37.275 Test: blockdev write zeroes read block ...passed 00:06:37.275 Test: blockdev write zeroes read no split ...passed 00:06:37.275 Test: blockdev write zeroes read split ...passed 00:06:37.275 Test: blockdev write zeroes read split partial ...passed 00:06:37.275 Test: blockdev reset ...[2024-11-03 04:27:00.183169] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:37.275 [2024-11-03 04:27:00.186102] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:37.275 passed 00:06:37.275 Test: blockdev write read 8 blocks ...passed 00:06:37.275 Test: blockdev write read size > 128k ...passed 00:06:37.275 Test: blockdev write read invalid size ...passed 00:06:37.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.275 Test: blockdev write read max offset ...passed 00:06:37.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.275 Test: blockdev writev readv 8 blocks ...passed 00:06:37.275 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.275 Test: blockdev writev readv block ...passed 00:06:37.275 Test: blockdev writev readv size > 128k ...passed 00:06:37.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.275 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.193213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dca38000 len:0x1000 00:06:37.275 [2024-11-03 04:27:00.193345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev nvme passthru rw ...passed 00:06:37.275 Test: blockdev nvme passthru vendor specific ...passed 00:06:37.275 Test: blockdev nvme admin passthru ...[2024-11-03 04:27:00.194035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:37.275 [2024-11-03 04:27:00.194096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev copy ...passed 00:06:37.275 Suite: bdevio tests on: Nvme2n1 00:06:37.275 Test: blockdev write read block ...passed 00:06:37.275 Test: blockdev write zeroes read block ...passed 00:06:37.275 Test: blockdev write zeroes read no split ...passed 00:06:37.275 Test: blockdev write zeroes read split ...passed 00:06:37.275 Test: blockdev write zeroes read split partial ...passed 00:06:37.275 Test: blockdev reset ...[2024-11-03 04:27:00.249295] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:37.275 [2024-11-03 04:27:00.252289] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:37.275 passed 00:06:37.275 Test: blockdev write read 8 blocks ...passed 00:06:37.275 Test: blockdev write read size > 128k ...passed 00:06:37.275 Test: blockdev write read invalid size ...passed 00:06:37.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.275 Test: blockdev write read max offset ...passed 00:06:37.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.275 Test: blockdev writev readv 8 blocks ...passed 00:06:37.275 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.275 Test: blockdev writev readv block ...passed 00:06:37.275 Test: blockdev writev readv size > 128k ...passed 00:06:37.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.275 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.259389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dca34000 len:0x1000 00:06:37.275 [2024-11-03 04:27:00.259429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev nvme passthru rw ...passed 00:06:37.275 Test: blockdev nvme passthru vendor specific ...[2024-11-03 04:27:00.260052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:37.275 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:37.275 [2024-11-03 04:27:00.260145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev copy ...passed 00:06:37.275 Suite: bdevio tests on: Nvme1n1p2 00:06:37.275 Test: blockdev write read block ...passed 00:06:37.275 Test: blockdev write zeroes read block ...passed 00:06:37.275 Test: blockdev write zeroes read no split ...passed 00:06:37.275 Test: blockdev write zeroes read split ...passed 00:06:37.275 Test: blockdev write zeroes read split partial ...passed 00:06:37.275 Test: blockdev reset ...[2024-11-03 04:27:00.320147] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:37.275 [2024-11-03 04:27:00.322787] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:37.275 passed 00:06:37.275 Test: blockdev write read 8 blocks ...passed 00:06:37.275 Test: blockdev write read size > 128k ...passed 00:06:37.275 Test: blockdev write read invalid size ...passed 00:06:37.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.275 Test: blockdev write read max offset ...passed 00:06:37.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.275 Test: blockdev writev readv 8 blocks ...passed 00:06:37.275 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.275 Test: blockdev writev readv block ...passed 00:06:37.275 Test: blockdev writev readv size > 128k ...passed 00:06:37.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.275 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.330590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dca30000 len:0x1000 00:06:37.275 [2024-11-03 04:27:00.330716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.275 passed 00:06:37.275 Test: blockdev nvme passthru rw ...passed 00:06:37.275 Test: blockdev nvme passthru vendor specific ...passed 00:06:37.275 Test: blockdev nvme admin passthru ...passed 00:06:37.275 Test: blockdev copy ...passed 00:06:37.275 Suite: bdevio tests on: Nvme1n1p1 00:06:37.275 Test: blockdev write read block ...passed 00:06:37.275 Test: blockdev write zeroes read block ...passed 00:06:37.275 Test: blockdev write zeroes read no split ...passed 00:06:37.275 Test: blockdev write zeroes read split ...passed 00:06:37.536 Test: blockdev write zeroes read split partial ...passed 00:06:37.536 Test: blockdev reset ...[2024-11-03 04:27:00.376313] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:37.536 [2024-11-03 04:27:00.379136] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:37.536 passed 00:06:37.536 Test: blockdev write read 8 blocks ...passed 00:06:37.536 Test: blockdev write read size > 128k ...passed 00:06:37.536 Test: blockdev write read invalid size ...passed 00:06:37.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.536 Test: blockdev write read max offset ...passed 00:06:37.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.536 Test: blockdev writev readv 8 blocks ...passed 00:06:37.536 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.536 Test: blockdev writev readv block ...passed 00:06:37.536 Test: blockdev writev readv size > 128k ...passed 00:06:37.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.536 Test: blockdev comparev and writev ...[2024-11-03 04:27:00.386850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b980e000 len:0x1000 00:06:37.536 [2024-11-03 04:27:00.386889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:37.536 passed 00:06:37.536 Test: blockdev nvme passthru rw ...passed 00:06:37.536 Test: blockdev nvme passthru vendor specific ...passed 00:06:37.536 Test: blockdev nvme admin passthru ...passed 00:06:37.536 Test: blockdev copy ...passed 00:06:37.536 Suite: bdevio tests on: Nvme0n1 00:06:37.536 Test: blockdev write read block ...passed 00:06:37.536 Test: blockdev write zeroes read block ...passed 00:06:37.536 Test: blockdev write zeroes read no split ...passed 00:06:37.536 Test: blockdev write zeroes read split ...passed 00:06:37.536 Test: blockdev write zeroes read split partial ...passed 00:06:37.536 Test: blockdev reset ...[2024-11-03 04:27:00.432294] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:37.536 [2024-11-03 04:27:00.437085] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:37.536 passed 00:06:37.536 Test: blockdev write read 8 blocks ...passed 00:06:37.536 Test: blockdev write read size > 128k ...passed 00:06:37.536 Test: blockdev write read invalid size ...passed 00:06:37.536 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:37.536 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:37.536 Test: blockdev write read max offset ...passed 00:06:37.536 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:37.536 Test: blockdev writev readv 8 blocks ...passed 00:06:37.536 Test: blockdev writev readv 30 x 1block ...passed 00:06:37.536 Test: blockdev writev readv block ...passed 00:06:37.536 Test: blockdev writev readv size > 128k ...passed 00:06:37.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:37.536 Test: blockdev comparev and writev ...passed[2024-11-03 04:27:00.444665] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:37.536 separate metadata which is not supported yet. 
00:06:37.536 00:06:37.536 Test: blockdev nvme passthru rw ...passed 00:06:37.536 Test: blockdev nvme passthru vendor specific ...[2024-11-03 04:27:00.445263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:37.536 passed 00:06:37.536 Test: blockdev nvme admin passthru ...[2024-11-03 04:27:00.445297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:37.536 passed 00:06:37.536 Test: blockdev copy ...passed 00:06:37.536 00:06:37.536 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.536 suites 7 7 n/a 0 0 00:06:37.536 tests 161 161 161 0 0 00:06:37.536 asserts 1025 1025 1025 0 n/a 00:06:37.536 00:06:37.536 Elapsed time = 1.186 seconds 00:06:37.536 0 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61388 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61388 ']' 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61388 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61388 00:06:37.536 killing process with pid 61388 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61388' 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61388 00:06:37.536 04:27:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61388 00:06:38.477 ************************************ 00:06:38.477 END TEST bdev_bounds 00:06:38.477 ************************************ 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:38.477 00:06:38.477 real 0m2.180s 00:06:38.477 user 0m5.643s 00:06:38.477 sys 0m0.254s 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:38.477 04:27:01 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:38.477 04:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:38.477 04:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.477 04:27:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.477 ************************************ 00:06:38.477 START TEST bdev_nbd 00:06:38.477 ************************************ 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:38.477 04:27:01 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61442 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:38.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61442 /var/tmp/spdk-nbd.sock 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61442 ']' 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.477 04:27:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:38.477 [2024-11-03 04:27:01.320487] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:06:38.477 [2024-11-03 04:27:01.320861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.477 [2024-11-03 04:27:01.488088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.737 [2024-11-03 04:27:01.609152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.306 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.566 1+0 records in 00:06:39.566 1+0 records out 00:06:39.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677324 s, 6.0 MB/s 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.566 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.567 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.828 1+0 records in 00:06:39.828 1+0 records out 00:06:39.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424815 s, 9.6 MB/s 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:39.828 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:39.829 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.829 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.829 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.089 1+0 records in 00:06:40.089 1+0 records out 00:06:40.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107462 s, 3.8 MB/s 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:40.089 04:27:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.351 1+0 records in 00:06:40.351 1+0 records out 00:06:40.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789859 s, 5.2 MB/s 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:40.351 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.610 1+0 records in 00:06:40.610 1+0 records out 00:06:40.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011505 s, 3.6 MB/s 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:40.610 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.871 1+0 records in 00:06:40.871 1+0 records out 00:06:40.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000888169 s, 4.6 MB/s 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:40.871 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.133 1+0 records in 00:06:41.133 1+0 records out 00:06:41.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794005 s, 5.2 MB/s 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:41.133 04:27:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.133 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd0", 00:06:41.133 "bdev_name": "Nvme0n1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd1", 00:06:41.133 "bdev_name": "Nvme1n1p1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd2", 00:06:41.133 "bdev_name": "Nvme1n1p2" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd3", 00:06:41.133 "bdev_name": "Nvme2n1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd4", 00:06:41.133 "bdev_name": "Nvme2n2" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd5", 00:06:41.133 "bdev_name": "Nvme2n3" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd6", 00:06:41.133 "bdev_name": "Nvme3n1" 00:06:41.133 } 00:06:41.133 ]' 00:06:41.133 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:41.133 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd0", 00:06:41.133 "bdev_name": "Nvme0n1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd1", 00:06:41.133 "bdev_name": "Nvme1n1p1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd2", 00:06:41.133 "bdev_name": "Nvme1n1p2" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd3", 00:06:41.133 "bdev_name": "Nvme2n1" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd4", 00:06:41.133 "bdev_name": "Nvme2n2" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd5", 00:06:41.133 "bdev_name": "Nvme2n3" 00:06:41.133 }, 00:06:41.133 { 00:06:41.133 "nbd_device": "/dev/nbd6", 00:06:41.133 "bdev_name": "Nvme3n1" 00:06:41.133 } 00:06:41.133 ]' 00:06:41.133 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.395 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.656 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.916 04:27:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.173 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.433 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.694 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.954 04:27:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.954 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.954 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.955 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.215 04:27:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.215 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:43.215 /dev/nbd0 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.475 1+0 records in 00:06:43.475 1+0 records out 00:06:43.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107144 s, 3.8 MB/s 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.475 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:43.475 /dev/nbd1 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:43.737 04:27:06 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.737 1+0 records in 00:06:43.737 1+0 records out 00:06:43.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116808 s, 3.5 MB/s 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:43.737 /dev/nbd10 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:43.737 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.997 1+0 records in 00:06:43.997 1+0 records out 00:06:43.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123065 s, 3.3 MB/s 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.997 04:27:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:43.997 /dev/nbd11 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:43.997 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.997 1+0 records in 00:06:43.997 1+0 records out 00:06:43.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034004 s, 12.0 MB/s 00:06:43.998 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:44.258 /dev/nbd12 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.258 1+0 records in 00:06:44.258 1+0 records out 00:06:44.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141419 s, 2.9 MB/s 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:44.258 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:44.520 /dev/nbd13 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.520 1+0 records in 00:06:44.520 1+0 records out 00:06:44.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751292 s, 5.5 MB/s 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:44.520 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:44.781 /dev/nbd14 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.781 1+0 records in 00:06:44.781 1+0 records out 00:06:44.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959081 s, 4.3 MB/s 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.781 04:27:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd0", 00:06:45.080 "bdev_name": "Nvme0n1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd1", 00:06:45.080 "bdev_name": "Nvme1n1p1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd10", 00:06:45.080 "bdev_name": "Nvme1n1p2" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd11", 00:06:45.080 "bdev_name": "Nvme2n1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd12", 00:06:45.080 "bdev_name": "Nvme2n2" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd13", 00:06:45.080 "bdev_name": "Nvme2n3" 
00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd14", 00:06:45.080 "bdev_name": "Nvme3n1" 00:06:45.080 } 00:06:45.080 ]' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd0", 00:06:45.080 "bdev_name": "Nvme0n1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd1", 00:06:45.080 "bdev_name": "Nvme1n1p1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd10", 00:06:45.080 "bdev_name": "Nvme1n1p2" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd11", 00:06:45.080 "bdev_name": "Nvme2n1" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd12", 00:06:45.080 "bdev_name": "Nvme2n2" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd13", 00:06:45.080 "bdev_name": "Nvme2n3" 00:06:45.080 }, 00:06:45.080 { 00:06:45.080 "nbd_device": "/dev/nbd14", 00:06:45.080 "bdev_name": "Nvme3n1" 00:06:45.080 } 00:06:45.080 ]' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.080 /dev/nbd1 00:06:45.080 /dev/nbd10 00:06:45.080 /dev/nbd11 00:06:45.080 /dev/nbd12 00:06:45.080 /dev/nbd13 00:06:45.080 /dev/nbd14' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.080 /dev/nbd1 00:06:45.080 /dev/nbd10 00:06:45.080 /dev/nbd11 00:06:45.080 /dev/nbd12 00:06:45.080 /dev/nbd13 00:06:45.080 /dev/nbd14' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:45.080 256+0 records in 00:06:45.080 256+0 records out 00:06:45.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124214 s, 84.4 MB/s 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.080 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.340 256+0 records in 00:06:45.340 256+0 records out 00:06:45.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.229919 s, 4.6 MB/s 00:06:45.340 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.341 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.601 256+0 records in 00:06:45.601 256+0 records out 00:06:45.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170313 s, 6.2 MB/s 00:06:45.601 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.601 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:45.601 256+0 records in 00:06:45.601 256+0 records out 00:06:45.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216939 s, 4.8 MB/s 00:06:45.601 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.602 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:45.867 256+0 records in 00:06:45.867 256+0 records out 00:06:45.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236223 s, 4.4 MB/s 00:06:45.867 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.867 04:27:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:46.128 256+0 records in 00:06:46.128 256+0 records out 00:06:46.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20697 s, 5.1 MB/s 00:06:46.129 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.129 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:46.389 256+0 records in 00:06:46.389 256+0 records out 00:06:46.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231886 s, 4.5 MB/s 00:06:46.389 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.389 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:46.649 256+0 records in 00:06:46.649 256+0 records out 00:06:46.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205449 s, 5.1 MB/s 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.649 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.908 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.908 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.908 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.908 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.908 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.909 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.909 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.909 04:27:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:46.909 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.909 04:27:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.167 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.426 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.686 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.947 04:27:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.206 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:48.463 malloc_lvol_verify 00:06:48.463 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:48.722 b9472cac-d940-4d5a-8217-430ea4ffe0d8 00:06:48.722 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:48.981 c33ac9fd-8bb3-4392-a27d-c06040599a54 00:06:48.981 04:27:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:49.239 /dev/nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:49.239 mke2fs 1.47.0 (5-Feb-2023) 00:06:49.239 Discarding device blocks: 0/4096 done 00:06:49.239 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:49.239 00:06:49.239 Allocating group tables: 0/1 done 00:06:49.239 Writing inode tables: 0/1 done 00:06:49.239 Creating journal (1024 blocks): done 00:06:49.239 Writing superblocks and filesystem accounting information: 0/1 done 00:06:49.239 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61442 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61442 ']' 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61442 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61442 00:06:49.239 killing process with pid 61442 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61442' 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61442 00:06:49.239 04:27:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61442 00:06:50.175 ************************************ 00:06:50.175 END TEST bdev_nbd 00:06:50.175 ************************************ 00:06:50.175 04:27:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:50.175 00:06:50.175 real 0m11.833s 00:06:50.175 user 0m16.064s 00:06:50.175 sys 0m4.010s 00:06:50.175 04:27:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.175 04:27:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:50.175 skipping fio tests on NVMe due to multi-ns failures. 00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:50.175 04:27:13 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:50.175 04:27:13 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:50.175 04:27:13 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.175 04:27:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.175 ************************************ 00:06:50.175 START TEST bdev_verify 00:06:50.175 ************************************ 00:06:50.175 04:27:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:50.175 [2024-11-03 04:27:13.174174] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:50.175 [2024-11-03 04:27:13.174283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61867 ] 00:06:50.434 [2024-11-03 04:27:13.335172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.434 [2024-11-03 04:27:13.434680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.434 [2024-11-03 04:27:13.434787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.000 Running I/O for 5 seconds... 
00:06:53.317 22016.00 IOPS, 86.00 MiB/s [2024-11-03T04:27:17.332Z] 23456.00 IOPS, 91.62 MiB/s [2024-11-03T04:27:18.704Z] 22613.33 IOPS, 88.33 MiB/s [2024-11-03T04:27:19.267Z] 21936.00 IOPS, 85.69 MiB/s [2024-11-03T04:27:19.267Z] 21606.40 IOPS, 84.40 MiB/s 00:06:56.183 Latency(us) 00:06:56.183 [2024-11-03T04:27:19.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.183 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.183 Verification LBA range: start 0x0 length 0xbd0bd 00:06:56.183 Nvme0n1 : 5.07 1514.30 5.92 0.00 0.00 84345.07 13913.80 77836.60 00:06:56.183 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.183 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:56.183 Nvme0n1 : 5.08 1536.36 6.00 0.00 0.00 82538.38 18350.08 65737.65 00:06:56.183 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.183 Verification LBA range: start 0x0 length 0x4ff80 00:06:56.183 Nvme1n1p1 : 5.07 1513.34 5.91 0.00 0.00 84245.06 16434.41 71383.83 00:06:56.184 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:56.184 Nvme1n1p1 : 5.09 1534.77 6.00 0.00 0.00 82418.61 17442.66 66140.95 00:06:56.184 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x0 length 0x4ff7f 00:06:56.184 Nvme1n1p2 : 5.08 1512.15 5.91 0.00 0.00 84109.31 18753.38 68157.44 00:06:56.184 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:56.184 Nvme1n1p2 : 5.09 1533.65 5.99 0.00 0.00 82277.10 13611.32 70173.93 00:06:56.184 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x0 length 0x80000 00:06:56.184 Nvme2n1 : 5.08 1511.10 5.90 0.00 0.00 83970.41 19660.80 65334.35 00:06:56.184 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x80000 length 0x80000 00:06:56.184 Nvme2n1 : 5.09 1532.70 5.99 0.00 0.00 82166.37 8822.15 74610.22 00:06:56.184 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x0 length 0x80000 00:06:56.184 Nvme2n2 : 5.09 1510.10 5.90 0.00 0.00 83841.85 18652.55 68157.44 00:06:56.184 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x80000 length 0x80000 00:06:56.184 Nvme2n2 : 5.07 1539.05 6.01 0.00 0.00 82971.79 14115.45 81062.99 00:06:56.184 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x0 length 0x80000 00:06:56.184 Nvme2n3 : 5.09 1509.67 5.90 0.00 0.00 83698.42 16333.59 70173.93 00:06:56.184 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x80000 length 0x80000 00:06:56.184 Nvme2n3 : 5.07 1538.60 6.01 0.00 0.00 82795.86 15930.29 72190.42 00:06:56.184 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x0 length 0x20000 00:06:56.184 Nvme3n1 : 5.09 1508.60 5.89 0.00 0.00 83579.08 8065.97 74610.22 00:06:56.184 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:56.184 Verification LBA range: start 0x20000 length 0x20000 00:06:56.184 Nvme3n1 
: 5.08 1537.43 6.01 0.00 0.00 82664.26 17140.18 68964.04 00:06:56.184 [2024-11-03T04:27:19.268Z] =================================================================================================================== 00:06:56.184 [2024-11-03T04:27:19.268Z] Total : 21331.82 83.33 0.00 0.00 83252.81 8065.97 81062.99 00:06:57.555 00:06:57.555 real 0m7.217s 00:06:57.555 user 0m13.539s 00:06:57.555 sys 0m0.215s 00:06:57.555 ************************************ 00:06:57.555 END TEST bdev_verify 00:06:57.555 ************************************ 00:06:57.555 04:27:20 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.555 04:27:20 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 04:27:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:57.555 04:27:20 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:57.555 04:27:20 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.555 04:27:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.555 ************************************ 00:06:57.555 START TEST bdev_verify_big_io 00:06:57.555 ************************************ 00:06:57.555 04:27:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:57.555 [2024-11-03 04:27:20.438066] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:57.555 [2024-11-03 04:27:20.438590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61965 ] 00:06:57.555 [2024-11-03 04:27:20.596960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.813 [2024-11-03 04:27:20.697362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.813 [2024-11-03 04:27:20.697498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.379 Running I/O for 5 seconds... 
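For reference, the bdev_verify workload whose totals appear just above, and the big-I/O variant that is starting here, are plain bdevperf invocations; the flags below are copied from this run, and only the I/O size differs between the two passes:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # verify workload: 4 KiB I/Os, queue depth 128, 5 seconds, core mask 0x3
    $BDEVPERF --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io repeats the same run with 64 KiB I/O units
    $BDEVPERF --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3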
00:07:03.548 1715.00 IOPS, 107.19 MiB/s [2024-11-03T04:27:27.566Z] 2236.50 IOPS, 139.78 MiB/s [2024-11-03T04:27:27.825Z] 2187.00 IOPS, 136.69 MiB/s 00:07:04.741 Latency(us) 00:07:04.741 [2024-11-03T04:27:27.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.741 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0xbd0b 00:07:04.741 Nvme0n1 : 6.06 85.09 5.32 0.00 0.00 1434743.22 11645.24 1729343.80 00:07:04.741 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:04.741 Nvme0n1 : 5.98 82.95 5.18 0.00 0.00 1317949.29 60898.07 2090699.22 00:07:04.741 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x4ff8 00:07:04.741 Nvme1n1p1 : 5.98 82.92 5.18 0.00 0.00 1434616.78 100824.62 1568024.42 00:07:04.741 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:04.741 Nvme1n1p1 : 6.07 137.11 8.57 0.00 0.00 776962.36 44564.48 916294.10 00:07:04.741 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x4ff7 00:07:04.741 Nvme1n1p2 : 6.12 86.81 5.43 0.00 0.00 1326569.78 81869.59 1935832.62 00:07:04.741 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:04.741 Nvme1n1p2 : 6.11 151.66 9.48 0.00 0.00 683233.87 932.63 1045349.61 00:07:04.741 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x8000 00:07:04.741 Nvme2n1 : 6.12 86.80 5.43 0.00 0.00 1279187.15 82272.89 1677721.60 00:07:04.741 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x8000 length 0x8000 00:07:04.741 Nvme2n1 : 5.72 114.96 7.19 0.00 0.00 1061217.23 14821.22 1245385.65 00:07:04.741 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x8000 00:07:04.741 Nvme2n2 : 6.12 95.28 5.95 0.00 0.00 1131721.03 49605.71 1251838.42 00:07:04.741 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x8000 length 0x8000 00:07:04.741 Nvme2n2 : 5.73 117.36 7.33 0.00 0.00 1015730.88 98001.53 1071160.71 00:07:04.741 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x8000 00:07:04.741 Nvme2n3 : 6.15 101.02 6.31 0.00 0.00 1037047.33 15022.87 2142321.43 00:07:04.741 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x8000 length 0x8000 00:07:04.741 Nvme2n3 : 5.83 120.64 7.54 0.00 0.00 963011.04 98001.53 1058255.16 00:07:04.741 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x0 length 0x2000 00:07:04.741 Nvme3n1 : 6.22 131.76 8.23 0.00 0.00 772753.85 341.86 2206849.18 00:07:04.741 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:04.741 Verification LBA range: start 0x2000 length 0x2000 00:07:04.741 Nvme3n1 : 5.89 123.07 7.69 0.00 0.00 917340.73 60898.07 1277649.53 00:07:04.741 
[2024-11-03T04:27:27.825Z] =================================================================================================================== 00:07:04.741 [2024-11-03T04:27:27.825Z] Total : 1517.41 94.84 0.00 0.00 1034949.67 341.86 2206849.18 00:07:06.115 00:07:06.115 real 0m8.780s 00:07:06.115 user 0m16.625s 00:07:06.115 sys 0m0.226s 00:07:06.115 04:27:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.115 ************************************ 00:07:06.115 END TEST bdev_verify_big_io 00:07:06.115 ************************************ 00:07:06.115 04:27:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:06.115 04:27:29 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.115 04:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:06.115 04:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.115 04:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.115 ************************************ 00:07:06.115 START TEST bdev_write_zeroes 00:07:06.115 ************************************ 00:07:06.115 04:27:29 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.373 [2024-11-03 04:27:29.259813] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:06.373 [2024-11-03 04:27:29.259924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62074 ] 00:07:06.373 [2024-11-03 04:27:29.411549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.631 [2024-11-03 04:27:29.513733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.196 Running I/O for 1 seconds... 
00:07:08.182 35430.00 IOPS, 138.40 MiB/s 00:07:08.182 Latency(us) 00:07:08.182 [2024-11-03T04:27:31.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.182 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme0n1 : 1.02 5123.18 20.01 0.00 0.00 24945.31 10989.88 522674.81 00:07:08.182 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme1n1p1 : 1.03 5181.12 20.24 0.00 0.00 24616.01 10939.47 458147.05 00:07:08.182 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme1n1p2 : 1.03 5089.04 19.88 0.00 0.00 24985.72 10788.23 519448.42 00:07:08.182 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme2n1 : 1.03 5106.65 19.95 0.00 0.00 24865.38 10939.47 512995.64 00:07:08.182 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme2n2 : 1.03 5100.88 19.93 0.00 0.00 24831.89 10939.47 512995.64 00:07:08.182 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme2n3 : 1.03 5095.10 19.90 0.00 0.00 24788.71 10989.88 509769.26 00:07:08.182 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:08.182 Nvme3n1 : 1.03 5089.39 19.88 0.00 0.00 24769.34 10233.70 509769.26 00:07:08.182 [2024-11-03T04:27:31.266Z] =================================================================================================================== 00:07:08.182 [2024-11-03T04:27:31.266Z] Total : 35785.34 139.79 0.00 0.00 24828.43 10233.70 522674.81 00:07:09.115 00:07:09.115 real 0m2.677s 00:07:09.115 user 0m2.378s 00:07:09.115 sys 0m0.187s 00:07:09.115 04:27:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.115 04:27:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 ************************************ 00:07:09.115 END TEST bdev_write_zeroes 00:07:09.115 ************************************ 00:07:09.115 04:27:31 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.115 04:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:09.115 04:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.115 04:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.115 ************************************ 00:07:09.115 START TEST bdev_json_nonenclosed 00:07:09.115 ************************************ 00:07:09.115 04:27:31 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.115 [2024-11-03 04:27:31.981609] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
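The bdev_write_zeroes pass summarized above uses the same harness with a different workload switch; the invocation, as logged for this run:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1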
00:07:09.115 [2024-11-03 04:27:31.981720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:07:09.115 [2024-11-03 04:27:32.132901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.372 [2024-11-03 04:27:32.234470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.372 [2024-11-03 04:27:32.234546] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:09.372 [2024-11-03 04:27:32.234575] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:09.372 [2024-11-03 04:27:32.234585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.372 00:07:09.372 real 0m0.492s 00:07:09.372 user 0m0.292s 00:07:09.372 sys 0m0.097s 00:07:09.372 04:27:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.372 ************************************ 00:07:09.372 END TEST bdev_json_nonenclosed 00:07:09.372 ************************************ 00:07:09.372 04:27:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:09.373 04:27:32 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.373 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:09.373 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.373 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.373 ************************************ 00:07:09.373 START TEST bdev_json_nonarray 00:07:09.373 ************************************ 00:07:09.373 04:27:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.630 [2024-11-03 04:27:32.515955] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:09.630 [2024-11-03 04:27:32.516076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62147 ] 00:07:09.630 [2024-11-03 04:27:32.680705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.888 [2024-11-03 04:27:32.782087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.888 [2024-11-03 04:27:32.782172] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
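The two bdev_json tests here are negative checks: bdevperf is handed a deliberately malformed --json config and must fail with the "Invalid JSON configuration" errors logged above. The actual nonenclosed.json/nonarray.json contents are not shown in this log; the snippet below is a hypothetical stand-in for the first case (a top-level value that is not enclosed in {}), only to illustrate the expected failure mode:

    printf '[ "subsystems" ]\n' > /tmp/bad.json    # top-level array, not a {...} object
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
           --json /tmp/bad.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "unexpected success: malformed config was accepted" >&2
        exit 1
    fi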
00:07:09.888 [2024-11-03 04:27:32.782189] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:09.888 [2024-11-03 04:27:32.782198] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.888 00:07:09.888 real 0m0.509s 00:07:09.888 user 0m0.301s 00:07:09.888 sys 0m0.104s 00:07:09.888 04:27:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.888 04:27:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 ************************************ 00:07:09.888 END TEST bdev_json_nonarray 00:07:09.888 ************************************ 00:07:10.145 04:27:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:10.145 04:27:32 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:10.145 04:27:32 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:10.145 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.145 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.146 04:27:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.146 ************************************ 00:07:10.146 START TEST bdev_gpt_uuid 00:07:10.146 ************************************ 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62178 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62178 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62178 ']' 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:10.146 04:27:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.146 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.146 [2024-11-03 04:27:33.073968] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
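The bdev_gpt_uuid test starting here reduces to a few RPC calls against the spdk_tgt instance being launched: load the bdev config, wait for examine, then fetch the GPT partition bdev by its UUID and check that the alias and the unique partition GUID round-trip. A sketch using the first partition GUID from this run (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py on the default socket):

    set -e                                          # a mismatch below should fail the check
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    UUID=6f89f330-603b-4116-ac73-2ca8eae53030
    "$RPC" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$RPC" bdev_wait_for_examine
    bdev_json=$("$RPC" bdev_get_bdevs -b "$UUID")
    # both the bdev alias and the GPT unique partition GUID must equal the lookup UUID
    [ "$(jq -r '.[0].aliases[0]' <<<"$bdev_json")" = "$UUID" ]
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json")" = "$UUID" ]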
00:07:10.146 [2024-11-03 04:27:33.074085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62178 ] 00:07:10.403 [2024-11-03 04:27:33.231183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.403 [2024-11-03 04:27:33.333453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.969 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.969 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:10.969 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.969 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.969 04:27:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:11.228 Some configs were skipped because the RPC state that can call them passed over. 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:11.228 { 00:07:11.228 "name": "Nvme1n1p1", 00:07:11.228 "aliases": [ 00:07:11.228 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:11.228 ], 00:07:11.228 "product_name": "GPT Disk", 00:07:11.228 "block_size": 4096, 00:07:11.228 "num_blocks": 655104, 00:07:11.228 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:11.228 "assigned_rate_limits": { 00:07:11.228 "rw_ios_per_sec": 0, 00:07:11.228 "rw_mbytes_per_sec": 0, 00:07:11.228 "r_mbytes_per_sec": 0, 00:07:11.228 "w_mbytes_per_sec": 0 00:07:11.228 }, 00:07:11.228 "claimed": false, 00:07:11.228 "zoned": false, 00:07:11.228 "supported_io_types": { 00:07:11.228 "read": true, 00:07:11.228 "write": true, 00:07:11.228 "unmap": true, 00:07:11.228 "flush": true, 00:07:11.228 "reset": true, 00:07:11.228 "nvme_admin": false, 00:07:11.228 "nvme_io": false, 00:07:11.228 "nvme_io_md": false, 00:07:11.228 "write_zeroes": true, 00:07:11.228 "zcopy": false, 00:07:11.228 "get_zone_info": false, 00:07:11.228 "zone_management": false, 00:07:11.228 "zone_append": false, 00:07:11.228 "compare": true, 00:07:11.228 "compare_and_write": false, 00:07:11.228 "abort": true, 00:07:11.228 "seek_hole": false, 00:07:11.228 "seek_data": false, 00:07:11.228 "copy": true, 00:07:11.228 "nvme_iov_md": false 00:07:11.228 }, 00:07:11.228 "driver_specific": { 
00:07:11.228 "gpt": { 00:07:11.228 "base_bdev": "Nvme1n1", 00:07:11.228 "offset_blocks": 256, 00:07:11.228 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:11.228 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:11.228 "partition_name": "SPDK_TEST_first" 00:07:11.228 } 00:07:11.228 } 00:07:11.228 } 00:07:11.228 ]' 00:07:11.228 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:11.486 { 00:07:11.486 "name": "Nvme1n1p2", 00:07:11.486 "aliases": [ 00:07:11.486 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:11.486 ], 00:07:11.486 "product_name": "GPT Disk", 00:07:11.486 "block_size": 4096, 00:07:11.486 "num_blocks": 655103, 00:07:11.486 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:11.486 "assigned_rate_limits": { 00:07:11.486 "rw_ios_per_sec": 0, 00:07:11.486 "rw_mbytes_per_sec": 0, 00:07:11.486 "r_mbytes_per_sec": 0, 00:07:11.486 "w_mbytes_per_sec": 0 00:07:11.486 }, 00:07:11.486 "claimed": false, 00:07:11.486 "zoned": false, 00:07:11.486 "supported_io_types": { 00:07:11.486 "read": true, 00:07:11.486 "write": true, 00:07:11.486 "unmap": true, 00:07:11.486 "flush": true, 00:07:11.486 "reset": true, 00:07:11.486 "nvme_admin": false, 00:07:11.486 "nvme_io": false, 00:07:11.486 "nvme_io_md": false, 00:07:11.486 "write_zeroes": true, 00:07:11.486 "zcopy": false, 00:07:11.486 "get_zone_info": false, 00:07:11.486 "zone_management": false, 00:07:11.486 "zone_append": false, 00:07:11.486 "compare": true, 00:07:11.486 "compare_and_write": false, 00:07:11.486 "abort": true, 00:07:11.486 "seek_hole": false, 00:07:11.486 "seek_data": false, 00:07:11.486 "copy": true, 00:07:11.486 "nvme_iov_md": false 00:07:11.486 }, 00:07:11.486 "driver_specific": { 00:07:11.486 "gpt": { 00:07:11.486 "base_bdev": "Nvme1n1", 00:07:11.486 "offset_blocks": 655360, 00:07:11.486 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:11.486 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:11.486 "partition_name": "SPDK_TEST_second" 00:07:11.486 } 00:07:11.486 } 00:07:11.486 } 00:07:11.486 ]' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62178 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62178 ']' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62178 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62178 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:11.486 killing process with pid 62178 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62178' 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62178 00:07:11.486 04:27:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62178 00:07:13.385 00:07:13.385 real 0m2.973s 00:07:13.385 user 0m3.085s 00:07:13.385 sys 0m0.370s 00:07:13.385 04:27:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.385 ************************************ 00:07:13.385 END TEST bdev_gpt_uuid 00:07:13.385 ************************************ 00:07:13.385 04:27:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:13.385 04:27:36 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:13.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.385 Waiting for block devices as requested 00:07:13.683 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:13.684 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:13.684 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:13.684 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.972 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:18.972 04:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:18.972 04:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:18.972 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:18.972 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:18.972 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:18.972 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:18.972 04:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:18.972 00:07:18.972 real 0m56.295s 00:07:18.972 user 1m11.893s 00:07:18.972 sys 0m8.033s 00:07:18.972 04:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.972 04:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.972 ************************************ 00:07:18.972 END TEST blockdev_nvme_gpt 00:07:18.972 ************************************ 00:07:18.972 04:27:41 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:18.972 04:27:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.972 04:27:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.972 04:27:41 -- common/autotest_common.sh@10 -- # set +x 00:07:18.972 ************************************ 00:07:18.972 START TEST nvme 00:07:18.972 ************************************ 00:07:18.972 04:27:42 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:19.229 * Looking for test storage... 00:07:19.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.229 04:27:42 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.229 04:27:42 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.229 04:27:42 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.229 04:27:42 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.229 04:27:42 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.229 04:27:42 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:19.229 04:27:42 nvme -- scripts/common.sh@345 -- # : 1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.229 04:27:42 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.229 04:27:42 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@353 -- # local d=1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.229 04:27:42 nvme -- scripts/common.sh@355 -- # echo 1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.229 04:27:42 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@353 -- # local d=2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.229 04:27:42 nvme -- scripts/common.sh@355 -- # echo 2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.229 04:27:42 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.229 04:27:42 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.229 04:27:42 nvme -- scripts/common.sh@368 -- # return 0 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.229 --rc genhtml_branch_coverage=1 00:07:19.229 --rc genhtml_function_coverage=1 00:07:19.229 --rc genhtml_legend=1 00:07:19.229 --rc geninfo_all_blocks=1 00:07:19.229 --rc geninfo_unexecuted_blocks=1 00:07:19.229 00:07:19.229 ' 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.229 --rc genhtml_branch_coverage=1 00:07:19.229 --rc genhtml_function_coverage=1 00:07:19.229 --rc genhtml_legend=1 00:07:19.229 --rc geninfo_all_blocks=1 00:07:19.229 --rc geninfo_unexecuted_blocks=1 00:07:19.229 00:07:19.229 ' 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.229 --rc genhtml_branch_coverage=1 00:07:19.229 --rc genhtml_function_coverage=1 00:07:19.229 --rc genhtml_legend=1 00:07:19.229 --rc geninfo_all_blocks=1 00:07:19.229 --rc geninfo_unexecuted_blocks=1 00:07:19.229 00:07:19.229 ' 00:07:19.229 04:27:42 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.229 --rc genhtml_branch_coverage=1 00:07:19.229 --rc genhtml_function_coverage=1 00:07:19.229 --rc genhtml_legend=1 00:07:19.229 --rc geninfo_all_blocks=1 00:07:19.229 --rc geninfo_unexecuted_blocks=1 00:07:19.229 00:07:19.229 ' 00:07:19.229 04:27:42 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:19.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:20.050 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.050 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.050 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.050 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.308 04:27:43 nvme -- nvme/nvme.sh@79 -- # uname 00:07:20.308 04:27:43 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:20.308 04:27:43 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:20.308 04:27:43 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:20.308 04:27:43 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1073 -- # stubpid=62814 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:20.308 Waiting for stub to ready for secondary processes... 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62814 ]] 00:07:20.308 04:27:43 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:07:20.308 [2024-11-03 04:27:43.181136] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:20.308 [2024-11-03 04:27:43.181254] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:21.239 [2024-11-03 04:27:43.961103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.239 [2024-11-03 04:27:44.058069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.239 [2024-11-03 04:27:44.058152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.239 [2024-11-03 04:27:44.058217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.239 [2024-11-03 04:27:44.071688] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:21.239 [2024-11-03 04:27:44.071725] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:21.239 [2024-11-03 04:27:44.079816] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:21.239 [2024-11-03 04:27:44.079932] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:21.239 [2024-11-03 04:27:44.081596] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:21.239 [2024-11-03 04:27:44.081730] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:21.239 [2024-11-03 04:27:44.081774] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:21.239 [2024-11-03 04:27:44.083502] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:21.239 [2024-11-03 04:27:44.083677] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:21.239 [2024-11-03 04:27:44.083748] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:21.239 [2024-11-03 04:27:44.086976] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:21.239 [2024-11-03 04:27:44.087128] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:21.239 [2024-11-03 04:27:44.087189] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:21.239 [2024-11-03 04:27:44.087234] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:21.239 [2024-11-03 04:27:44.087272] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:21.239 done. 00:07:21.239 04:27:44 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:21.239 04:27:44 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:07:21.239 04:27:44 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:21.239 04:27:44 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:07:21.239 04:27:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.239 04:27:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.239 ************************************ 00:07:21.239 START TEST nvme_reset 00:07:21.239 ************************************ 00:07:21.239 04:27:44 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:21.497 Initializing NVMe Controllers 00:07:21.497 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:21.497 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:21.497 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:21.497 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:21.497 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:21.497 00:07:21.497 real 0m0.199s 00:07:21.497 user 0m0.071s 00:07:21.497 sys 0m0.084s 00:07:21.497 ************************************ 00:07:21.497 END TEST nvme_reset 00:07:21.497 ************************************ 00:07:21.497 04:27:44 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.497 04:27:44 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:21.497 04:27:44 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:21.497 04:27:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.497 04:27:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.497 04:27:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.497 ************************************ 00:07:21.497 START TEST nvme_identify 00:07:21.497 ************************************ 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:07:21.497 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:21.497 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:21.497 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:21.497 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:21.497 04:27:44 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:21.497 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:21.758 [2024-11-03 
04:27:44.634820] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62835 terminated unexpected 00:07:21.758 ===================================================== 00:07:21.758 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:21.758 ===================================================== 00:07:21.758 Controller Capabilities/Features 00:07:21.758 ================================ 00:07:21.758 Vendor ID: 1b36 00:07:21.758 Subsystem Vendor ID: 1af4 00:07:21.758 Serial Number: 12340 00:07:21.758 Model Number: QEMU NVMe Ctrl 00:07:21.758 Firmware Version: 8.0.0 00:07:21.758 Recommended Arb Burst: 6 00:07:21.758 IEEE OUI Identifier: 00 54 52 00:07:21.758 Multi-path I/O 00:07:21.758 May have multiple subsystem ports: No 00:07:21.758 May have multiple controllers: No 00:07:21.758 Associated with SR-IOV VF: No 00:07:21.758 Max Data Transfer Size: 524288 00:07:21.758 Max Number of Namespaces: 256 00:07:21.758 Max Number of I/O Queues: 64 00:07:21.758 NVMe Specification Version (VS): 1.4 00:07:21.758 NVMe Specification Version (Identify): 1.4 00:07:21.758 Maximum Queue Entries: 2048 00:07:21.758 Contiguous Queues Required: Yes 00:07:21.758 Arbitration Mechanisms Supported 00:07:21.758 Weighted Round Robin: Not Supported 00:07:21.758 Vendor Specific: Not Supported 00:07:21.758 Reset Timeout: 7500 ms 00:07:21.758 Doorbell Stride: 4 bytes 00:07:21.758 NVM Subsystem Reset: Not Supported 00:07:21.758 Command Sets Supported 00:07:21.758 NVM Command Set: Supported 00:07:21.758 Boot Partition: Not Supported 00:07:21.758 Memory Page Size Minimum: 4096 bytes 00:07:21.759 Memory Page Size Maximum: 65536 bytes 00:07:21.759 Persistent Memory Region: Not Supported 00:07:21.759 Optional Asynchronous Events Supported 00:07:21.759 Namespace Attribute Notices: Supported 00:07:21.759 Firmware Activation Notices: Not Supported 00:07:21.759 ANA Change Notices: Not Supported 00:07:21.759 PLE Aggregate Log Change Notices: Not Supported 00:07:21.759 LBA Status Info Alert Notices: Not Supported 00:07:21.759 EGE Aggregate Log Change Notices: Not Supported 00:07:21.759 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.759 Zone Descriptor Change Notices: Not Supported 00:07:21.759 Discovery Log Change Notices: Not Supported 00:07:21.759 Controller Attributes 00:07:21.759 128-bit Host Identifier: Not Supported 00:07:21.759 Non-Operational Permissive Mode: Not Supported 00:07:21.759 NVM Sets: Not Supported 00:07:21.759 Read Recovery Levels: Not Supported 00:07:21.759 Endurance Groups: Not Supported 00:07:21.759 Predictable Latency Mode: Not Supported 00:07:21.759 Traffic Based Keep ALive: Not Supported 00:07:21.759 Namespace Granularity: Not Supported 00:07:21.759 SQ Associations: Not Supported 00:07:21.759 UUID List: Not Supported 00:07:21.759 Multi-Domain Subsystem: Not Supported 00:07:21.759 Fixed Capacity Management: Not Supported 00:07:21.759 Variable Capacity Management: Not Supported 00:07:21.759 Delete Endurance Group: Not Supported 00:07:21.759 Delete NVM Set: Not Supported 00:07:21.759 Extended LBA Formats Supported: Supported 00:07:21.759 Flexible Data Placement Supported: Not Supported 00:07:21.759 00:07:21.759 Controller Memory Buffer Support 00:07:21.759 ================================ 00:07:21.759 Supported: No 00:07:21.759 00:07:21.759 Persistent Memory Region Support 00:07:21.759 ================================ 00:07:21.759 Supported: No 00:07:21.759 00:07:21.759 Admin Command Set Attributes 00:07:21.759 ============================ 00:07:21.759 Security Send/Receive: 
Not Supported 00:07:21.759 Format NVM: Supported 00:07:21.759 Firmware Activate/Download: Not Supported 00:07:21.759 Namespace Management: Supported 00:07:21.759 Device Self-Test: Not Supported 00:07:21.759 Directives: Supported 00:07:21.759 NVMe-MI: Not Supported 00:07:21.759 Virtualization Management: Not Supported 00:07:21.759 Doorbell Buffer Config: Supported 00:07:21.759 Get LBA Status Capability: Not Supported 00:07:21.759 Command & Feature Lockdown Capability: Not Supported 00:07:21.759 Abort Command Limit: 4 00:07:21.759 Async Event Request Limit: 4 00:07:21.759 Number of Firmware Slots: N/A 00:07:21.759 Firmware Slot 1 Read-Only: N/A 00:07:21.759 Firmware Activation Without Reset: N/A 00:07:21.759 Multiple Update Detection Support: N/A 00:07:21.759 Firmware Update Granularity: No Information Provided 00:07:21.759 Per-Namespace SMART Log: Yes 00:07:21.759 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.759 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:21.759 Command Effects Log Page: Supported 00:07:21.759 Get Log Page Extended Data: Supported 00:07:21.759 Telemetry Log Pages: Not Supported 00:07:21.759 Persistent Event Log Pages: Not Supported 00:07:21.759 Supported Log Pages Log Page: May Support 00:07:21.759 Commands Supported & Effects Log Page: Not Supported 00:07:21.759 Feature Identifiers & Effects Log Page:May Support 00:07:21.759 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.759 Data Area 4 for Telemetry Log: Not Supported 00:07:21.759 Error Log Page Entries Supported: 1 00:07:21.759 Keep Alive: Not Supported 00:07:21.759 00:07:21.759 NVM Command Set Attributes 00:07:21.759 ========================== 00:07:21.759 Submission Queue Entry Size 00:07:21.759 Max: 64 00:07:21.759 Min: 64 00:07:21.759 Completion Queue Entry Size 00:07:21.759 Max: 16 00:07:21.759 Min: 16 00:07:21.759 Number of Namespaces: 256 00:07:21.759 Compare Command: Supported 00:07:21.759 Write Uncorrectable Command: Not Supported 00:07:21.759 Dataset Management Command: Supported 00:07:21.759 Write Zeroes Command: Supported 00:07:21.759 Set Features Save Field: Supported 00:07:21.759 Reservations: Not Supported 00:07:21.759 Timestamp: Supported 00:07:21.759 Copy: Supported 00:07:21.759 Volatile Write Cache: Present 00:07:21.759 Atomic Write Unit (Normal): 1 00:07:21.759 Atomic Write Unit (PFail): 1 00:07:21.759 Atomic Compare & Write Unit: 1 00:07:21.759 Fused Compare & Write: Not Supported 00:07:21.759 Scatter-Gather List 00:07:21.759 SGL Command Set: Supported 00:07:21.759 SGL Keyed: Not Supported 00:07:21.759 SGL Bit Bucket Descriptor: Not Supported 00:07:21.759 SGL Metadata Pointer: Not Supported 00:07:21.759 Oversized SGL: Not Supported 00:07:21.759 SGL Metadata Address: Not Supported 00:07:21.759 SGL Offset: Not Supported 00:07:21.759 Transport SGL Data Block: Not Supported 00:07:21.759 Replay Protected Memory Block: Not Supported 00:07:21.759 00:07:21.759 Firmware Slot Information 00:07:21.759 ========================= 00:07:21.759 Active slot: 1 00:07:21.759 Slot 1 Firmware Revision: 1.0 00:07:21.759 00:07:21.759 00:07:21.759 Commands Supported and Effects 00:07:21.759 ============================== 00:07:21.759 Admin Commands 00:07:21.759 -------------- 00:07:21.759 Delete I/O Submission Queue (00h): Supported 00:07:21.759 Create I/O Submission Queue (01h): Supported 00:07:21.759 Get Log Page (02h): Supported 00:07:21.759 Delete I/O Completion Queue (04h): Supported 00:07:21.759 Create I/O Completion Queue (05h): Supported 00:07:21.759 Identify (06h): Supported 
00:07:21.759 Abort (08h): Supported 00:07:21.759 Set Features (09h): Supported 00:07:21.759 Get Features (0Ah): Supported 00:07:21.759 Asynchronous Event Request (0Ch): Supported 00:07:21.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.759 Directive Send (19h): Supported 00:07:21.759 Directive Receive (1Ah): Supported 00:07:21.759 Virtualization Management (1Ch): Supported 00:07:21.759 Doorbell Buffer Config (7Ch): Supported 00:07:21.759 Format NVM (80h): Supported LBA-Change 00:07:21.759 I/O Commands 00:07:21.759 ------------ 00:07:21.759 Flush (00h): Supported LBA-Change 00:07:21.759 Write (01h): Supported LBA-Change 00:07:21.759 Read (02h): Supported 00:07:21.759 Compare (05h): Supported 00:07:21.759 Write Zeroes (08h): Supported LBA-Change 00:07:21.759 Dataset Management (09h): Supported LBA-Change 00:07:21.759 Unknown (0Ch): Supported 00:07:21.759 Unknown (12h): Supported 00:07:21.759 Copy (19h): Supported LBA-Change 00:07:21.759 Unknown (1Dh): Supported LBA-Change 00:07:21.759 00:07:21.759 Error Log 00:07:21.759 ========= 00:07:21.759 00:07:21.759 Arbitration 00:07:21.759 =========== 00:07:21.759 Arbitration Burst: no limit 00:07:21.759 00:07:21.759 Power Management 00:07:21.759 ================ 00:07:21.759 Number of Power States: 1 00:07:21.759 Current Power State: Power State #0 00:07:21.759 Power State #0: 00:07:21.759 Max Power: 25.00 W 00:07:21.759 Non-Operational State: Operational 00:07:21.759 Entry Latency: 16 microseconds 00:07:21.759 Exit Latency: 4 microseconds 00:07:21.759 Relative Read Throughput: 0 00:07:21.759 Relative Read Latency: 0 00:07:21.759 Relative Write Throughput: 0 00:07:21.759 Relative Write Latency: 0 00:07:21.759 Idle Power[2024-11-03 04:27:44.636059] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62835 terminated unexpected 00:07:21.759 : Not Reported 00:07:21.759 Active Power: Not Reported 00:07:21.759 Non-Operational Permissive Mode: Not Supported 00:07:21.759 00:07:21.759 Health Information 00:07:21.759 ================== 00:07:21.759 Critical Warnings: 00:07:21.759 Available Spare Space: OK 00:07:21.759 Temperature: OK 00:07:21.759 Device Reliability: OK 00:07:21.759 Read Only: No 00:07:21.759 Volatile Memory Backup: OK 00:07:21.759 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.759 Available Spare: 0% 00:07:21.759 Available Spare Threshold: 0% 00:07:21.759 Life Percentage Used: 0% 00:07:21.759 Data Units Read: 642 00:07:21.759 Data Units Written: 570 00:07:21.759 Host Read Commands: 36836 00:07:21.759 Host Write Commands: 36622 00:07:21.759 Controller Busy Time: 0 minutes 00:07:21.759 Power Cycles: 0 00:07:21.759 Power On Hours: 0 hours 00:07:21.759 Unsafe Shutdowns: 0 00:07:21.759 Unrecoverable Media Errors: 0 00:07:21.759 Lifetime Error Log Entries: 0 00:07:21.759 Warning Temperature Time: 0 minutes 00:07:21.759 Critical Temperature Time: 0 minutes 00:07:21.759 00:07:21.760 Number of Queues 00:07:21.760 ================ 00:07:21.760 Number of I/O Submission Queues: 64 00:07:21.760 Number of I/O Completion Queues: 64 00:07:21.760 00:07:21.760 ZNS Specific Controller Data 00:07:21.760 ============================ 00:07:21.760 Zone Append Size Limit: 0 00:07:21.760 00:07:21.760 00:07:21.760 Active Namespaces 00:07:21.760 ================= 00:07:21.760 Namespace ID:1 00:07:21.760 Error Recovery Timeout: Unlimited 00:07:21.760 Command Set Identifier: NVM (00h) 00:07:21.760 Deallocate: Supported 00:07:21.760 
Deallocated/Unwritten Error: Supported 00:07:21.760 Deallocated Read Value: All 0x00 00:07:21.760 Deallocate in Write Zeroes: Not Supported 00:07:21.760 Deallocated Guard Field: 0xFFFF 00:07:21.760 Flush: Supported 00:07:21.760 Reservation: Not Supported 00:07:21.760 Metadata Transferred as: Separate Metadata Buffer 00:07:21.760 Namespace Sharing Capabilities: Private 00:07:21.760 Size (in LBAs): 1548666 (5GiB) 00:07:21.760 Capacity (in LBAs): 1548666 (5GiB) 00:07:21.760 Utilization (in LBAs): 1548666 (5GiB) 00:07:21.760 Thin Provisioning: Not Supported 00:07:21.760 Per-NS Atomic Units: No 00:07:21.760 Maximum Single Source Range Length: 128 00:07:21.760 Maximum Copy Length: 128 00:07:21.760 Maximum Source Range Count: 128 00:07:21.760 NGUID/EUI64 Never Reused: No 00:07:21.760 Namespace Write Protected: No 00:07:21.760 Number of LBA Formats: 8 00:07:21.760 Current LBA Format: LBA Format #07 00:07:21.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.760 00:07:21.760 NVM Specific Namespace Data 00:07:21.760 =========================== 00:07:21.760 Logical Block Storage Tag Mask: 0 00:07:21.760 Protection Information Capabilities: 00:07:21.760 16b Guard Protection Information Storage Tag Support: No 00:07:21.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.760 Storage Tag Check Read Support: No 00:07:21.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.760 ===================================================== 00:07:21.760 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:21.760 ===================================================== 00:07:21.760 Controller Capabilities/Features 00:07:21.760 ================================ 00:07:21.760 Vendor ID: 1b36 00:07:21.760 Subsystem Vendor ID: 1af4 00:07:21.760 Serial Number: 12341 00:07:21.760 Model Number: QEMU NVMe Ctrl 00:07:21.760 Firmware Version: 8.0.0 00:07:21.760 Recommended Arb Burst: 6 00:07:21.760 IEEE OUI Identifier: 00 54 52 00:07:21.760 Multi-path I/O 00:07:21.760 May have multiple subsystem ports: No 00:07:21.760 May have multiple controllers: No 00:07:21.760 Associated with SR-IOV VF: No 00:07:21.760 Max Data Transfer Size: 524288 00:07:21.760 Max Number of Namespaces: 256 00:07:21.760 Max Number of I/O Queues: 64 00:07:21.760 NVMe Specification Version (VS): 1.4 00:07:21.760 NVMe 
Specification Version (Identify): 1.4 00:07:21.760 Maximum Queue Entries: 2048 00:07:21.760 Contiguous Queues Required: Yes 00:07:21.760 Arbitration Mechanisms Supported 00:07:21.760 Weighted Round Robin: Not Supported 00:07:21.760 Vendor Specific: Not Supported 00:07:21.760 Reset Timeout: 7500 ms 00:07:21.760 Doorbell Stride: 4 bytes 00:07:21.760 NVM Subsystem Reset: Not Supported 00:07:21.760 Command Sets Supported 00:07:21.760 NVM Command Set: Supported 00:07:21.760 Boot Partition: Not Supported 00:07:21.760 Memory Page Size Minimum: 4096 bytes 00:07:21.760 Memory Page Size Maximum: 65536 bytes 00:07:21.760 Persistent Memory Region: Not Supported 00:07:21.760 Optional Asynchronous Events Supported 00:07:21.760 Namespace Attribute Notices: Supported 00:07:21.760 Firmware Activation Notices: Not Supported 00:07:21.760 ANA Change Notices: Not Supported 00:07:21.760 PLE Aggregate Log Change Notices: Not Supported 00:07:21.760 LBA Status Info Alert Notices: Not Supported 00:07:21.760 EGE Aggregate Log Change Notices: Not Supported 00:07:21.760 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.760 Zone Descriptor Change Notices: Not Supported 00:07:21.760 Discovery Log Change Notices: Not Supported 00:07:21.760 Controller Attributes 00:07:21.760 128-bit Host Identifier: Not Supported 00:07:21.760 Non-Operational Permissive Mode: Not Supported 00:07:21.760 NVM Sets: Not Supported 00:07:21.760 Read Recovery Levels: Not Supported 00:07:21.760 Endurance Groups: Not Supported 00:07:21.760 Predictable Latency Mode: Not Supported 00:07:21.760 Traffic Based Keep ALive: Not Supported 00:07:21.760 Namespace Granularity: Not Supported 00:07:21.760 SQ Associations: Not Supported 00:07:21.760 UUID List: Not Supported 00:07:21.760 Multi-Domain Subsystem: Not Supported 00:07:21.760 Fixed Capacity Management: Not Supported 00:07:21.760 Variable Capacity Management: Not Supported 00:07:21.760 Delete Endurance Group: Not Supported 00:07:21.760 Delete NVM Set: Not Supported 00:07:21.760 Extended LBA Formats Supported: Supported 00:07:21.760 Flexible Data Placement Supported: Not Supported 00:07:21.760 00:07:21.760 Controller Memory Buffer Support 00:07:21.760 ================================ 00:07:21.760 Supported: No 00:07:21.760 00:07:21.760 Persistent Memory Region Support 00:07:21.760 ================================ 00:07:21.760 Supported: No 00:07:21.760 00:07:21.760 Admin Command Set Attributes 00:07:21.760 ============================ 00:07:21.760 Security Send/Receive: Not Supported 00:07:21.760 Format NVM: Supported 00:07:21.760 Firmware Activate/Download: Not Supported 00:07:21.760 Namespace Management: Supported 00:07:21.760 Device Self-Test: Not Supported 00:07:21.760 Directives: Supported 00:07:21.760 NVMe-MI: Not Supported 00:07:21.760 Virtualization Management: Not Supported 00:07:21.760 Doorbell Buffer Config: Supported 00:07:21.760 Get LBA Status Capability: Not Supported 00:07:21.760 Command & Feature Lockdown Capability: Not Supported 00:07:21.760 Abort Command Limit: 4 00:07:21.760 Async Event Request Limit: 4 00:07:21.760 Number of Firmware Slots: N/A 00:07:21.760 Firmware Slot 1 Read-Only: N/A 00:07:21.760 Firmware Activation Without Reset: N/A 00:07:21.760 Multiple Update Detection Support: N/A 00:07:21.760 Firmware Update Granularity: No Information Provided 00:07:21.760 Per-Namespace SMART Log: Yes 00:07:21.760 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.760 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:21.760 Command Effects Log Page: Supported 
00:07:21.760 Get Log Page Extended Data: Supported 00:07:21.760 Telemetry Log Pages: Not Supported 00:07:21.760 Persistent Event Log Pages: Not Supported 00:07:21.760 Supported Log Pages Log Page: May Support 00:07:21.760 Commands Supported & Effects Log Page: Not Supported 00:07:21.760 Feature Identifiers & Effects Log Page:May Support 00:07:21.760 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.760 Data Area 4 for Telemetry Log: Not Supported 00:07:21.760 Error Log Page Entries Supported: 1 00:07:21.760 Keep Alive: Not Supported 00:07:21.760 00:07:21.760 NVM Command Set Attributes 00:07:21.760 ========================== 00:07:21.760 Submission Queue Entry Size 00:07:21.760 Max: 64 00:07:21.760 Min: 64 00:07:21.760 Completion Queue Entry Size 00:07:21.760 Max: 16 00:07:21.760 Min: 16 00:07:21.760 Number of Namespaces: 256 00:07:21.760 Compare Command: Supported 00:07:21.760 Write Uncorrectable Command: Not Supported 00:07:21.760 Dataset Management Command: Supported 00:07:21.760 Write Zeroes Command: Supported 00:07:21.760 Set Features Save Field: Supported 00:07:21.760 Reservations: Not Supported 00:07:21.760 Timestamp: Supported 00:07:21.760 Copy: Supported 00:07:21.760 Volatile Write Cache: Present 00:07:21.760 Atomic Write Unit (Normal): 1 00:07:21.761 Atomic Write Unit (PFail): 1 00:07:21.761 Atomic Compare & Write Unit: 1 00:07:21.761 Fused Compare & Write: Not Supported 00:07:21.761 Scatter-Gather List 00:07:21.761 SGL Command Set: Supported 00:07:21.761 SGL Keyed: Not Supported 00:07:21.761 SGL Bit Bucket Descriptor: Not Supported 00:07:21.761 SGL Metadata Pointer: Not Supported 00:07:21.761 Oversized SGL: Not Supported 00:07:21.761 SGL Metadata Address: Not Supported 00:07:21.761 SGL Offset: Not Supported 00:07:21.761 Transport SGL Data Block: Not Supported 00:07:21.761 Replay Protected Memory Block: Not Supported 00:07:21.761 00:07:21.761 Firmware Slot Information 00:07:21.761 ========================= 00:07:21.761 Active slot: 1 00:07:21.761 Slot 1 Firmware Revision: 1.0 00:07:21.761 00:07:21.761 00:07:21.761 Commands Supported and Effects 00:07:21.761 ============================== 00:07:21.761 Admin Commands 00:07:21.761 -------------- 00:07:21.761 Delete I/O Submission Queue (00h): Supported 00:07:21.761 Create I/O Submission Queue (01h): Supported 00:07:21.761 Get Log Page (02h): Supported 00:07:21.761 Delete I/O Completion Queue (04h): Supported 00:07:21.761 Create I/O Completion Queue (05h): Supported 00:07:21.761 Identify (06h): Supported 00:07:21.761 Abort (08h): Supported 00:07:21.761 Set Features (09h): Supported 00:07:21.761 Get Features (0Ah): Supported 00:07:21.761 Asynchronous Event Request (0Ch): Supported 00:07:21.761 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.761 Directive Send (19h): Supported 00:07:21.761 Directive Receive (1Ah): Supported 00:07:21.761 Virtualization Management (1Ch): Supported 00:07:21.761 Doorbell Buffer Config (7Ch): Supported 00:07:21.761 Format NVM (80h): Supported LBA-Change 00:07:21.761 I/O Commands 00:07:21.761 ------------ 00:07:21.761 Flush (00h): Supported LBA-Change 00:07:21.761 Write (01h): Supported LBA-Change 00:07:21.761 Read (02h): Supported 00:07:21.761 Compare (05h): Supported 00:07:21.761 Write Zeroes (08h): Supported LBA-Change 00:07:21.761 Dataset Management (09h): Supported LBA-Change 00:07:21.761 Unknown (0Ch): Supported 00:07:21.761 Unknown (12h): Supported 00:07:21.761 Copy (19h): Supported LBA-Change 00:07:21.761 Unknown (1Dh): Supported LBA-Change 00:07:21.761 00:07:21.761 Error 
Log 00:07:21.761 ========= 00:07:21.761 00:07:21.761 Arbitration 00:07:21.761 =========== 00:07:21.761 Arbitration Burst: no limit 00:07:21.761 00:07:21.761 Power Management 00:07:21.761 ================ 00:07:21.761 Number of Power States: 1 00:07:21.761 Current Power State: Power State #0 00:07:21.761 Power State #0: 00:07:21.761 Max Power: 25.00 W 00:07:21.761 Non-Operational State: Operational 00:07:21.761 Entry Latency: 16 microseconds 00:07:21.761 Exit Latency: 4 microseconds 00:07:21.761 Relative Read Throughput: 0 00:07:21.761 Relative Read Latency: 0 00:07:21.761 Relative Write Throughput: 0 00:07:21.761 Relative Write Latency: 0 00:07:21.761 Idle Power: Not Reported 00:07:21.761 Active Power: Not Reported 00:07:21.761 Non-Operational Permissive Mode: Not Supported 00:07:21.761 00:07:21.761 Health Information 00:07:21.761 ================== 00:07:21.761 Critical Warnings: 00:07:21.761 Available Spare Space: OK 00:07:21.761 Temperature: [2024-11-03 04:27:44.636719] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62835 terminated unexpected 00:07:21.761 OK 00:07:21.761 Device Reliability: OK 00:07:21.761 Read Only: No 00:07:21.761 Volatile Memory Backup: OK 00:07:21.761 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.761 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.761 Available Spare: 0% 00:07:21.761 Available Spare Threshold: 0% 00:07:21.761 Life Percentage Used: 0% 00:07:21.761 Data Units Read: 1055 00:07:21.761 Data Units Written: 922 00:07:21.761 Host Read Commands: 55100 00:07:21.761 Host Write Commands: 53896 00:07:21.761 Controller Busy Time: 0 minutes 00:07:21.761 Power Cycles: 0 00:07:21.761 Power On Hours: 0 hours 00:07:21.761 Unsafe Shutdowns: 0 00:07:21.761 Unrecoverable Media Errors: 0 00:07:21.761 Lifetime Error Log Entries: 0 00:07:21.761 Warning Temperature Time: 0 minutes 00:07:21.761 Critical Temperature Time: 0 minutes 00:07:21.761 00:07:21.761 Number of Queues 00:07:21.761 ================ 00:07:21.761 Number of I/O Submission Queues: 64 00:07:21.761 Number of I/O Completion Queues: 64 00:07:21.761 00:07:21.761 ZNS Specific Controller Data 00:07:21.761 ============================ 00:07:21.761 Zone Append Size Limit: 0 00:07:21.761 00:07:21.761 00:07:21.761 Active Namespaces 00:07:21.761 ================= 00:07:21.761 Namespace ID:1 00:07:21.761 Error Recovery Timeout: Unlimited 00:07:21.761 Command Set Identifier: NVM (00h) 00:07:21.761 Deallocate: Supported 00:07:21.761 Deallocated/Unwritten Error: Supported 00:07:21.761 Deallocated Read Value: All 0x00 00:07:21.761 Deallocate in Write Zeroes: Not Supported 00:07:21.761 Deallocated Guard Field: 0xFFFF 00:07:21.761 Flush: Supported 00:07:21.761 Reservation: Not Supported 00:07:21.761 Namespace Sharing Capabilities: Private 00:07:21.761 Size (in LBAs): 1310720 (5GiB) 00:07:21.761 Capacity (in LBAs): 1310720 (5GiB) 00:07:21.761 Utilization (in LBAs): 1310720 (5GiB) 00:07:21.761 Thin Provisioning: Not Supported 00:07:21.761 Per-NS Atomic Units: No 00:07:21.761 Maximum Single Source Range Length: 128 00:07:21.761 Maximum Copy Length: 128 00:07:21.761 Maximum Source Range Count: 128 00:07:21.761 NGUID/EUI64 Never Reused: No 00:07:21.761 Namespace Write Protected: No 00:07:21.761 Number of LBA Formats: 8 00:07:21.761 Current LBA Format: LBA Format #04 00:07:21.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.761 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:21.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.761 00:07:21.761 NVM Specific Namespace Data 00:07:21.761 =========================== 00:07:21.761 Logical Block Storage Tag Mask: 0 00:07:21.761 Protection Information Capabilities: 00:07:21.761 16b Guard Protection Information Storage Tag Support: No 00:07:21.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.761 Storage Tag Check Read Support: No 00:07:21.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.761 ===================================================== 00:07:21.761 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:21.761 ===================================================== 00:07:21.761 Controller Capabilities/Features 00:07:21.761 ================================ 00:07:21.761 Vendor ID: 1b36 00:07:21.761 Subsystem Vendor ID: 1af4 00:07:21.761 Serial Number: 12343 00:07:21.761 Model Number: QEMU NVMe Ctrl 00:07:21.761 Firmware Version: 8.0.0 00:07:21.761 Recommended Arb Burst: 6 00:07:21.761 IEEE OUI Identifier: 00 54 52 00:07:21.761 Multi-path I/O 00:07:21.761 May have multiple subsystem ports: No 00:07:21.761 May have multiple controllers: Yes 00:07:21.761 Associated with SR-IOV VF: No 00:07:21.761 Max Data Transfer Size: 524288 00:07:21.761 Max Number of Namespaces: 256 00:07:21.761 Max Number of I/O Queues: 64 00:07:21.761 NVMe Specification Version (VS): 1.4 00:07:21.761 NVMe Specification Version (Identify): 1.4 00:07:21.761 Maximum Queue Entries: 2048 00:07:21.761 Contiguous Queues Required: Yes 00:07:21.761 Arbitration Mechanisms Supported 00:07:21.761 Weighted Round Robin: Not Supported 00:07:21.761 Vendor Specific: Not Supported 00:07:21.761 Reset Timeout: 7500 ms 00:07:21.761 Doorbell Stride: 4 bytes 00:07:21.761 NVM Subsystem Reset: Not Supported 00:07:21.761 Command Sets Supported 00:07:21.761 NVM Command Set: Supported 00:07:21.761 Boot Partition: Not Supported 00:07:21.761 Memory Page Size Minimum: 4096 bytes 00:07:21.761 Memory Page Size Maximum: 65536 bytes 00:07:21.761 Persistent Memory Region: Not Supported 00:07:21.761 Optional Asynchronous Events Supported 00:07:21.762 Namespace Attribute Notices: Supported 00:07:21.762 Firmware Activation Notices: Not Supported 00:07:21.762 ANA Change Notices: Not Supported 00:07:21.762 PLE Aggregate Log Change Notices: Not Supported 00:07:21.762 LBA Status Info Alert Notices: Not Supported 00:07:21.762 EGE Aggregate Log Change Notices: Not Supported 00:07:21.762 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.762 Zone 
Descriptor Change Notices: Not Supported 00:07:21.762 Discovery Log Change Notices: Not Supported 00:07:21.762 Controller Attributes 00:07:21.762 128-bit Host Identifier: Not Supported 00:07:21.762 Non-Operational Permissive Mode: Not Supported 00:07:21.762 NVM Sets: Not Supported 00:07:21.762 Read Recovery Levels: Not Supported 00:07:21.762 Endurance Groups: Supported 00:07:21.762 Predictable Latency Mode: Not Supported 00:07:21.762 Traffic Based Keep ALive: Not Supported 00:07:21.762 Namespace Granularity: Not Supported 00:07:21.762 SQ Associations: Not Supported 00:07:21.762 UUID List: Not Supported 00:07:21.762 Multi-Domain Subsystem: Not Supported 00:07:21.762 Fixed Capacity Management: Not Supported 00:07:21.762 Variable Capacity Management: Not Supported 00:07:21.762 Delete Endurance Group: Not Supported 00:07:21.762 Delete NVM Set: Not Supported 00:07:21.762 Extended LBA Formats Supported: Supported 00:07:21.762 Flexible Data Placement Supported: Supported 00:07:21.762 00:07:21.762 Controller Memory Buffer Support 00:07:21.762 ================================ 00:07:21.762 Supported: No 00:07:21.762 00:07:21.762 Persistent Memory Region Support 00:07:21.762 ================================ 00:07:21.762 Supported: No 00:07:21.762 00:07:21.762 Admin Command Set Attributes 00:07:21.762 ============================ 00:07:21.762 Security Send/Receive: Not Supported 00:07:21.762 Format NVM: Supported 00:07:21.762 Firmware Activate/Download: Not Supported 00:07:21.762 Namespace Management: Supported 00:07:21.762 Device Self-Test: Not Supported 00:07:21.762 Directives: Supported 00:07:21.762 NVMe-MI: Not Supported 00:07:21.762 Virtualization Management: Not Supported 00:07:21.762 Doorbell Buffer Config: Supported 00:07:21.762 Get LBA Status Capability: Not Supported 00:07:21.762 Command & Feature Lockdown Capability: Not Supported 00:07:21.762 Abort Command Limit: 4 00:07:21.762 Async Event Request Limit: 4 00:07:21.762 Number of Firmware Slots: N/A 00:07:21.762 Firmware Slot 1 Read-Only: N/A 00:07:21.762 Firmware Activation Without Reset: N/A 00:07:21.762 Multiple Update Detection Support: N/A 00:07:21.762 Firmware Update Granularity: No Information Provided 00:07:21.762 Per-Namespace SMART Log: Yes 00:07:21.762 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.762 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:21.762 Command Effects Log Page: Supported 00:07:21.762 Get Log Page Extended Data: Supported 00:07:21.762 Telemetry Log Pages: Not Supported 00:07:21.762 Persistent Event Log Pages: Not Supported 00:07:21.762 Supported Log Pages Log Page: May Support 00:07:21.762 Commands Supported & Effects Log Page: Not Supported 00:07:21.762 Feature Identifiers & Effects Log Page:May Support 00:07:21.762 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.762 Data Area 4 for Telemetry Log: Not Supported 00:07:21.762 Error Log Page Entries Supported: 1 00:07:21.762 Keep Alive: Not Supported 00:07:21.762 00:07:21.762 NVM Command Set Attributes 00:07:21.762 ========================== 00:07:21.762 Submission Queue Entry Size 00:07:21.762 Max: 64 00:07:21.762 Min: 64 00:07:21.762 Completion Queue Entry Size 00:07:21.762 Max: 16 00:07:21.762 Min: 16 00:07:21.762 Number of Namespaces: 256 00:07:21.762 Compare Command: Supported 00:07:21.762 Write Uncorrectable Command: Not Supported 00:07:21.762 Dataset Management Command: Supported 00:07:21.762 Write Zeroes Command: Supported 00:07:21.762 Set Features Save Field: Supported 00:07:21.762 Reservations: Not Supported 00:07:21.762 
Timestamp: Supported 00:07:21.762 Copy: Supported 00:07:21.762 Volatile Write Cache: Present 00:07:21.762 Atomic Write Unit (Normal): 1 00:07:21.762 Atomic Write Unit (PFail): 1 00:07:21.762 Atomic Compare & Write Unit: 1 00:07:21.762 Fused Compare & Write: Not Supported 00:07:21.762 Scatter-Gather List 00:07:21.762 SGL Command Set: Supported 00:07:21.762 SGL Keyed: Not Supported 00:07:21.762 SGL Bit Bucket Descriptor: Not Supported 00:07:21.762 SGL Metadata Pointer: Not Supported 00:07:21.762 Oversized SGL: Not Supported 00:07:21.762 SGL Metadata Address: Not Supported 00:07:21.762 SGL Offset: Not Supported 00:07:21.762 Transport SGL Data Block: Not Supported 00:07:21.762 Replay Protected Memory Block: Not Supported 00:07:21.762 00:07:21.762 Firmware Slot Information 00:07:21.762 ========================= 00:07:21.762 Active slot: 1 00:07:21.762 Slot 1 Firmware Revision: 1.0 00:07:21.762 00:07:21.762 00:07:21.762 Commands Supported and Effects 00:07:21.762 ============================== 00:07:21.762 Admin Commands 00:07:21.762 -------------- 00:07:21.762 Delete I/O Submission Queue (00h): Supported 00:07:21.762 Create I/O Submission Queue (01h): Supported 00:07:21.762 Get Log Page (02h): Supported 00:07:21.762 Delete I/O Completion Queue (04h): Supported 00:07:21.762 Create I/O Completion Queue (05h): Supported 00:07:21.762 Identify (06h): Supported 00:07:21.762 Abort (08h): Supported 00:07:21.762 Set Features (09h): Supported 00:07:21.762 Get Features (0Ah): Supported 00:07:21.762 Asynchronous Event Request (0Ch): Supported 00:07:21.762 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.762 Directive Send (19h): Supported 00:07:21.762 Directive Receive (1Ah): Supported 00:07:21.762 Virtualization Management (1Ch): Supported 00:07:21.762 Doorbell Buffer Config (7Ch): Supported 00:07:21.762 Format NVM (80h): Supported LBA-Change 00:07:21.762 I/O Commands 00:07:21.762 ------------ 00:07:21.762 Flush (00h): Supported LBA-Change 00:07:21.762 Write (01h): Supported LBA-Change 00:07:21.762 Read (02h): Supported 00:07:21.762 Compare (05h): Supported 00:07:21.762 Write Zeroes (08h): Supported LBA-Change 00:07:21.762 Dataset Management (09h): Supported LBA-Change 00:07:21.762 Unknown (0Ch): Supported 00:07:21.762 Unknown (12h): Supported 00:07:21.762 Copy (19h): Supported LBA-Change 00:07:21.762 Unknown (1Dh): Supported LBA-Change 00:07:21.762 00:07:21.762 Error Log 00:07:21.762 ========= 00:07:21.762 00:07:21.762 Arbitration 00:07:21.762 =========== 00:07:21.762 Arbitration Burst: no limit 00:07:21.762 00:07:21.762 Power Management 00:07:21.762 ================ 00:07:21.762 Number of Power States: 1 00:07:21.762 Current Power State: Power State #0 00:07:21.762 Power State #0: 00:07:21.762 Max Power: 25.00 W 00:07:21.762 Non-Operational State: Operational 00:07:21.762 Entry Latency: 16 microseconds 00:07:21.762 Exit Latency: 4 microseconds 00:07:21.762 Relative Read Throughput: 0 00:07:21.762 Relative Read Latency: 0 00:07:21.762 Relative Write Throughput: 0 00:07:21.762 Relative Write Latency: 0 00:07:21.762 Idle Power: Not Reported 00:07:21.762 Active Power: Not Reported 00:07:21.762 Non-Operational Permissive Mode: Not Supported 00:07:21.762 00:07:21.762 Health Information 00:07:21.762 ================== 00:07:21.762 Critical Warnings: 00:07:21.762 Available Spare Space: OK 00:07:21.762 Temperature: OK 00:07:21.762 Device Reliability: OK 00:07:21.762 Read Only: No 00:07:21.762 Volatile Memory Backup: OK 00:07:21.762 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.762 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.762 Available Spare: 0% 00:07:21.762 Available Spare Threshold: 0% 00:07:21.762 Life Percentage Used: 0% 00:07:21.762 Data Units Read: 766 00:07:21.762 Data Units Written: 695 00:07:21.762 Host Read Commands: 38044 00:07:21.762 Host Write Commands: 37467 00:07:21.762 Controller Busy Time: 0 minutes 00:07:21.762 Power Cycles: 0 00:07:21.762 Power On Hours: 0 hours 00:07:21.762 Unsafe Shutdowns: 0 00:07:21.762 Unrecoverable Media Errors: 0 00:07:21.762 Lifetime Error Log Entries: 0 00:07:21.762 Warning Temperature Time: 0 minutes 00:07:21.762 Critical Temperature Time: 0 minutes 00:07:21.762 00:07:21.762 Number of Queues 00:07:21.762 ================ 00:07:21.762 Number of I/O Submission Queues: 64 00:07:21.762 Number of I/O Completion Queues: 64 00:07:21.762 00:07:21.762 ZNS Specific Controller Data 00:07:21.762 ============================ 00:07:21.762 Zone Append Size Limit: 0 00:07:21.762 00:07:21.762 00:07:21.762 Active Namespaces 00:07:21.762 ================= 00:07:21.762 Namespace ID:1 00:07:21.762 Error Recovery Timeout: Unlimited 00:07:21.763 Command Set Identifier: NVM (00h) 00:07:21.763 Deallocate: Supported 00:07:21.763 Deallocated/Unwritten Error: Supported 00:07:21.763 Deallocated Read Value: All 0x00 00:07:21.763 Deallocate in Write Zeroes: Not Supported 00:07:21.763 Deallocated Guard Field: 0xFFFF 00:07:21.763 Flush: Supported 00:07:21.763 Reservation: Not Supported 00:07:21.763 Namespace Sharing Capabilities: Multiple Controllers 00:07:21.763 Size (in LBAs): 262144 (1GiB) 00:07:21.763 Capacity (in LBAs): 262144 (1GiB) 00:07:21.763 Utilization (in LBAs): 262144 (1GiB) 00:07:21.763 Thin Provisioning: Not Supported 00:07:21.763 Per-NS Atomic Units: No 00:07:21.763 Maximum Single Source Range Length: 128 00:07:21.763 Maximum Copy Length: 128 00:07:21.763 Maximum Source Range Count: 128 00:07:21.763 NGUID/EUI64 Never Reused: No 00:07:21.763 Namespace Write Protected: No 00:07:21.763 Endurance group ID: 1 00:07:21.763 Number of LBA Formats: 8 00:07:21.763 Current LBA Format: LBA Format #04 00:07:21.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.763 00:07:21.763 Get Feature FDP: 00:07:21.763 ================ 00:07:21.763 Enabled: Yes 00:07:21.763 FDP configuration index: 0 00:07:21.763 00:07:21.763 FDP configurations log page 00:07:21.763 =========================== 00:07:21.763 Number of FDP configurations: 1 00:07:21.763 Version: 0 00:07:21.763 Size: 112 00:07:21.763 FDP Configuration Descriptor: 0 00:07:21.763 Descriptor Size: 96 00:07:21.763 Reclaim Group Identifier format: 2 00:07:21.763 FDP Volatile Write Cache: Not Present 00:07:21.763 FDP Configuration: Valid 00:07:21.763 Vendor Specific Size: 0 00:07:21.763 Number of Reclaim Groups: 2 00:07:21.763 Number of Recalim Unit Handles: 8 00:07:21.763 Max Placement Identifiers: 128 00:07:21.763 Number of Namespaces Suppprted: 256 00:07:21.763 Reclaim unit Nominal Size: 6000000 bytes 00:07:21.763 Estimated Reclaim Unit Time Limit: Not Reported 00:07:21.763 RUH Desc #000: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:21.763 RUH Desc #002: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #003: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #004: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #005: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #006: RUH Type: Initially Isolated 00:07:21.763 RUH Desc #007: RUH Type: Initially Isolated 00:07:21.763 00:07:21.763 FDP reclaim unit handle usage log page 00:07:21.763 ====================================== 00:07:21.763 Number of Reclaim Unit Handles: 8 00:07:21.763 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:21.763 RUH Usage Desc #001: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #002: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #003: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #004: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #005: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #006: RUH Attributes: Unused 00:07:21.763 RUH Usage Desc #007: RUH Attributes: Unused 00:07:21.763 00:07:21.763 FDP statistics log page 00:07:21.763 ======================= 00:07:21.763 Host bytes with metadata written: 445947904 00:07:21.763 Medi[2024-11-03 04:27:44.638037] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62835 terminated unexpected 00:07:21.763 a bytes with metadata written: 445992960 00:07:21.763 Media bytes erased: 0 00:07:21.763 00:07:21.763 FDP events log page 00:07:21.763 =================== 00:07:21.763 Number of FDP events: 0 00:07:21.763 00:07:21.763 NVM Specific Namespace Data 00:07:21.763 =========================== 00:07:21.763 Logical Block Storage Tag Mask: 0 00:07:21.763 Protection Information Capabilities: 00:07:21.763 16b Guard Protection Information Storage Tag Support: No 00:07:21.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.763 Storage Tag Check Read Support: No 00:07:21.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.763 ===================================================== 00:07:21.763 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:21.763 ===================================================== 00:07:21.763 Controller Capabilities/Features 00:07:21.763 ================================ 00:07:21.763 Vendor ID: 1b36 00:07:21.763 Subsystem Vendor ID: 1af4 00:07:21.763 Serial Number: 12342 00:07:21.763 Model Number: QEMU NVMe Ctrl 00:07:21.763 Firmware Version: 8.0.0 00:07:21.763 Recommended Arb Burst: 6 00:07:21.763 IEEE OUI Identifier: 00 54 52 00:07:21.763 Multi-path I/O 00:07:21.763 May have multiple subsystem ports: No 00:07:21.763 May have multiple controllers: No 00:07:21.763 Associated with SR-IOV VF: No 00:07:21.763 Max Data Transfer Size: 524288 00:07:21.763 Max Number of Namespaces: 256 
00:07:21.763 Max Number of I/O Queues: 64 00:07:21.763 NVMe Specification Version (VS): 1.4 00:07:21.763 NVMe Specification Version (Identify): 1.4 00:07:21.763 Maximum Queue Entries: 2048 00:07:21.763 Contiguous Queues Required: Yes 00:07:21.763 Arbitration Mechanisms Supported 00:07:21.763 Weighted Round Robin: Not Supported 00:07:21.763 Vendor Specific: Not Supported 00:07:21.763 Reset Timeout: 7500 ms 00:07:21.763 Doorbell Stride: 4 bytes 00:07:21.763 NVM Subsystem Reset: Not Supported 00:07:21.763 Command Sets Supported 00:07:21.763 NVM Command Set: Supported 00:07:21.763 Boot Partition: Not Supported 00:07:21.763 Memory Page Size Minimum: 4096 bytes 00:07:21.763 Memory Page Size Maximum: 65536 bytes 00:07:21.763 Persistent Memory Region: Not Supported 00:07:21.763 Optional Asynchronous Events Supported 00:07:21.763 Namespace Attribute Notices: Supported 00:07:21.763 Firmware Activation Notices: Not Supported 00:07:21.763 ANA Change Notices: Not Supported 00:07:21.763 PLE Aggregate Log Change Notices: Not Supported 00:07:21.763 LBA Status Info Alert Notices: Not Supported 00:07:21.763 EGE Aggregate Log Change Notices: Not Supported 00:07:21.763 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.763 Zone Descriptor Change Notices: Not Supported 00:07:21.763 Discovery Log Change Notices: Not Supported 00:07:21.763 Controller Attributes 00:07:21.763 128-bit Host Identifier: Not Supported 00:07:21.763 Non-Operational Permissive Mode: Not Supported 00:07:21.763 NVM Sets: Not Supported 00:07:21.763 Read Recovery Levels: Not Supported 00:07:21.763 Endurance Groups: Not Supported 00:07:21.763 Predictable Latency Mode: Not Supported 00:07:21.763 Traffic Based Keep ALive: Not Supported 00:07:21.763 Namespace Granularity: Not Supported 00:07:21.763 SQ Associations: Not Supported 00:07:21.764 UUID List: Not Supported 00:07:21.764 Multi-Domain Subsystem: Not Supported 00:07:21.764 Fixed Capacity Management: Not Supported 00:07:21.764 Variable Capacity Management: Not Supported 00:07:21.764 Delete Endurance Group: Not Supported 00:07:21.764 Delete NVM Set: Not Supported 00:07:21.764 Extended LBA Formats Supported: Supported 00:07:21.764 Flexible Data Placement Supported: Not Supported 00:07:21.764 00:07:21.764 Controller Memory Buffer Support 00:07:21.764 ================================ 00:07:21.764 Supported: No 00:07:21.764 00:07:21.764 Persistent Memory Region Support 00:07:21.764 ================================ 00:07:21.764 Supported: No 00:07:21.764 00:07:21.764 Admin Command Set Attributes 00:07:21.764 ============================ 00:07:21.764 Security Send/Receive: Not Supported 00:07:21.764 Format NVM: Supported 00:07:21.764 Firmware Activate/Download: Not Supported 00:07:21.764 Namespace Management: Supported 00:07:21.764 Device Self-Test: Not Supported 00:07:21.764 Directives: Supported 00:07:21.764 NVMe-MI: Not Supported 00:07:21.764 Virtualization Management: Not Supported 00:07:21.764 Doorbell Buffer Config: Supported 00:07:21.764 Get LBA Status Capability: Not Supported 00:07:21.764 Command & Feature Lockdown Capability: Not Supported 00:07:21.764 Abort Command Limit: 4 00:07:21.764 Async Event Request Limit: 4 00:07:21.764 Number of Firmware Slots: N/A 00:07:21.764 Firmware Slot 1 Read-Only: N/A 00:07:21.764 Firmware Activation Without Reset: N/A 00:07:21.764 Multiple Update Detection Support: N/A 00:07:21.764 Firmware Update Granularity: No Information Provided 00:07:21.764 Per-Namespace SMART Log: Yes 00:07:21.764 Asymmetric Namespace Access Log Page: Not Supported 
00:07:21.764 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:21.764 Command Effects Log Page: Supported 00:07:21.764 Get Log Page Extended Data: Supported 00:07:21.764 Telemetry Log Pages: Not Supported 00:07:21.764 Persistent Event Log Pages: Not Supported 00:07:21.764 Supported Log Pages Log Page: May Support 00:07:21.764 Commands Supported & Effects Log Page: Not Supported 00:07:21.764 Feature Identifiers & Effects Log Page:May Support 00:07:21.764 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.764 Data Area 4 for Telemetry Log: Not Supported 00:07:21.764 Error Log Page Entries Supported: 1 00:07:21.764 Keep Alive: Not Supported 00:07:21.764 00:07:21.764 NVM Command Set Attributes 00:07:21.764 ========================== 00:07:21.764 Submission Queue Entry Size 00:07:21.764 Max: 64 00:07:21.764 Min: 64 00:07:21.764 Completion Queue Entry Size 00:07:21.764 Max: 16 00:07:21.764 Min: 16 00:07:21.764 Number of Namespaces: 256 00:07:21.764 Compare Command: Supported 00:07:21.764 Write Uncorrectable Command: Not Supported 00:07:21.764 Dataset Management Command: Supported 00:07:21.764 Write Zeroes Command: Supported 00:07:21.764 Set Features Save Field: Supported 00:07:21.764 Reservations: Not Supported 00:07:21.764 Timestamp: Supported 00:07:21.764 Copy: Supported 00:07:21.764 Volatile Write Cache: Present 00:07:21.764 Atomic Write Unit (Normal): 1 00:07:21.764 Atomic Write Unit (PFail): 1 00:07:21.764 Atomic Compare & Write Unit: 1 00:07:21.764 Fused Compare & Write: Not Supported 00:07:21.764 Scatter-Gather List 00:07:21.764 SGL Command Set: Supported 00:07:21.764 SGL Keyed: Not Supported 00:07:21.764 SGL Bit Bucket Descriptor: Not Supported 00:07:21.764 SGL Metadata Pointer: Not Supported 00:07:21.764 Oversized SGL: Not Supported 00:07:21.764 SGL Metadata Address: Not Supported 00:07:21.764 SGL Offset: Not Supported 00:07:21.764 Transport SGL Data Block: Not Supported 00:07:21.764 Replay Protected Memory Block: Not Supported 00:07:21.764 00:07:21.764 Firmware Slot Information 00:07:21.764 ========================= 00:07:21.764 Active slot: 1 00:07:21.764 Slot 1 Firmware Revision: 1.0 00:07:21.764 00:07:21.764 00:07:21.764 Commands Supported and Effects 00:07:21.764 ============================== 00:07:21.764 Admin Commands 00:07:21.764 -------------- 00:07:21.764 Delete I/O Submission Queue (00h): Supported 00:07:21.764 Create I/O Submission Queue (01h): Supported 00:07:21.764 Get Log Page (02h): Supported 00:07:21.764 Delete I/O Completion Queue (04h): Supported 00:07:21.764 Create I/O Completion Queue (05h): Supported 00:07:21.764 Identify (06h): Supported 00:07:21.764 Abort (08h): Supported 00:07:21.764 Set Features (09h): Supported 00:07:21.764 Get Features (0Ah): Supported 00:07:21.764 Asynchronous Event Request (0Ch): Supported 00:07:21.764 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.764 Directive Send (19h): Supported 00:07:21.764 Directive Receive (1Ah): Supported 00:07:21.764 Virtualization Management (1Ch): Supported 00:07:21.764 Doorbell Buffer Config (7Ch): Supported 00:07:21.764 Format NVM (80h): Supported LBA-Change 00:07:21.764 I/O Commands 00:07:21.764 ------------ 00:07:21.764 Flush (00h): Supported LBA-Change 00:07:21.764 Write (01h): Supported LBA-Change 00:07:21.764 Read (02h): Supported 00:07:21.764 Compare (05h): Supported 00:07:21.764 Write Zeroes (08h): Supported LBA-Change 00:07:21.764 Dataset Management (09h): Supported LBA-Change 00:07:21.764 Unknown (0Ch): Supported 00:07:21.764 Unknown (12h): Supported 00:07:21.764 Copy (19h): 
Supported LBA-Change 00:07:21.764 Unknown (1Dh): Supported LBA-Change 00:07:21.764 00:07:21.764 Error Log 00:07:21.764 ========= 00:07:21.764 00:07:21.764 Arbitration 00:07:21.764 =========== 00:07:21.764 Arbitration Burst: no limit 00:07:21.764 00:07:21.764 Power Management 00:07:21.764 ================ 00:07:21.764 Number of Power States: 1 00:07:21.764 Current Power State: Power State #0 00:07:21.764 Power State #0: 00:07:21.764 Max Power: 25.00 W 00:07:21.764 Non-Operational State: Operational 00:07:21.764 Entry Latency: 16 microseconds 00:07:21.764 Exit Latency: 4 microseconds 00:07:21.764 Relative Read Throughput: 0 00:07:21.764 Relative Read Latency: 0 00:07:21.764 Relative Write Throughput: 0 00:07:21.764 Relative Write Latency: 0 00:07:21.764 Idle Power: Not Reported 00:07:21.764 Active Power: Not Reported 00:07:21.764 Non-Operational Permissive Mode: Not Supported 00:07:21.764 00:07:21.764 Health Information 00:07:21.764 ================== 00:07:21.764 Critical Warnings: 00:07:21.764 Available Spare Space: OK 00:07:21.764 Temperature: OK 00:07:21.764 Device Reliability: OK 00:07:21.764 Read Only: No 00:07:21.764 Volatile Memory Backup: OK 00:07:21.764 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.764 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.764 Available Spare: 0% 00:07:21.764 Available Spare Threshold: 0% 00:07:21.764 Life Percentage Used: 0% 00:07:21.764 Data Units Read: 2076 00:07:21.764 Data Units Written: 1863 00:07:21.764 Host Read Commands: 112215 00:07:21.764 Host Write Commands: 110484 00:07:21.764 Controller Busy Time: 0 minutes 00:07:21.764 Power Cycles: 0 00:07:21.764 Power On Hours: 0 hours 00:07:21.764 Unsafe Shutdowns: 0 00:07:21.764 Unrecoverable Media Errors: 0 00:07:21.764 Lifetime Error Log Entries: 0 00:07:21.764 Warning Temperature Time: 0 minutes 00:07:21.764 Critical Temperature Time: 0 minutes 00:07:21.764 00:07:21.764 Number of Queues 00:07:21.764 ================ 00:07:21.764 Number of I/O Submission Queues: 64 00:07:21.764 Number of I/O Completion Queues: 64 00:07:21.764 00:07:21.764 ZNS Specific Controller Data 00:07:21.764 ============================ 00:07:21.764 Zone Append Size Limit: 0 00:07:21.764 00:07:21.764 00:07:21.764 Active Namespaces 00:07:21.764 ================= 00:07:21.764 Namespace ID:1 00:07:21.764 Error Recovery Timeout: Unlimited 00:07:21.764 Command Set Identifier: NVM (00h) 00:07:21.764 Deallocate: Supported 00:07:21.764 Deallocated/Unwritten Error: Supported 00:07:21.764 Deallocated Read Value: All 0x00 00:07:21.764 Deallocate in Write Zeroes: Not Supported 00:07:21.764 Deallocated Guard Field: 0xFFFF 00:07:21.764 Flush: Supported 00:07:21.764 Reservation: Not Supported 00:07:21.764 Namespace Sharing Capabilities: Private 00:07:21.764 Size (in LBAs): 1048576 (4GiB) 00:07:21.764 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.764 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.764 Thin Provisioning: Not Supported 00:07:21.764 Per-NS Atomic Units: No 00:07:21.764 Maximum Single Source Range Length: 128 00:07:21.764 Maximum Copy Length: 128 00:07:21.764 Maximum Source Range Count: 128 00:07:21.765 NGUID/EUI64 Never Reused: No 00:07:21.765 Namespace Write Protected: No 00:07:21.765 Number of LBA Formats: 8 00:07:21.765 Current LBA Format: LBA Format #04 00:07:21.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.765 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.765 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.765 00:07:21.765 NVM Specific Namespace Data 00:07:21.765 =========================== 00:07:21.765 Logical Block Storage Tag Mask: 0 00:07:21.765 Protection Information Capabilities: 00:07:21.765 16b Guard Protection Information Storage Tag Support: No 00:07:21.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.765 Storage Tag Check Read Support: No 00:07:21.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Namespace ID:2 00:07:21.765 Error Recovery Timeout: Unlimited 00:07:21.765 Command Set Identifier: NVM (00h) 00:07:21.765 Deallocate: Supported 00:07:21.765 Deallocated/Unwritten Error: Supported 00:07:21.765 Deallocated Read Value: All 0x00 00:07:21.765 Deallocate in Write Zeroes: Not Supported 00:07:21.765 Deallocated Guard Field: 0xFFFF 00:07:21.765 Flush: Supported 00:07:21.765 Reservation: Not Supported 00:07:21.765 Namespace Sharing Capabilities: Private 00:07:21.765 Size (in LBAs): 1048576 (4GiB) 00:07:21.765 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.765 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.765 Thin Provisioning: Not Supported 00:07:21.765 Per-NS Atomic Units: No 00:07:21.765 Maximum Single Source Range Length: 128 00:07:21.765 Maximum Copy Length: 128 00:07:21.765 Maximum Source Range Count: 128 00:07:21.765 NGUID/EUI64 Never Reused: No 00:07:21.765 Namespace Write Protected: No 00:07:21.765 Number of LBA Formats: 8 00:07:21.765 Current LBA Format: LBA Format #04 00:07:21.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.765 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.765 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.765 00:07:21.765 NVM Specific Namespace Data 00:07:21.765 =========================== 00:07:21.765 Logical Block Storage Tag Mask: 0 00:07:21.765 Protection Information Capabilities: 00:07:21.765 16b Guard Protection Information Storage Tag Support: No 00:07:21.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.765 Storage Tag Check Read Support: No 00:07:21.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Namespace ID:3 00:07:21.765 Error Recovery Timeout: Unlimited 00:07:21.765 Command Set Identifier: NVM (00h) 00:07:21.765 Deallocate: Supported 00:07:21.765 Deallocated/Unwritten Error: Supported 00:07:21.765 Deallocated Read Value: All 0x00 00:07:21.765 Deallocate in Write Zeroes: Not Supported 00:07:21.765 Deallocated Guard Field: 0xFFFF 00:07:21.765 Flush: Supported 00:07:21.765 Reservation: Not Supported 00:07:21.765 Namespace Sharing Capabilities: Private 00:07:21.765 Size (in LBAs): 1048576 (4GiB) 00:07:21.765 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.765 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.765 Thin Provisioning: Not Supported 00:07:21.765 Per-NS Atomic Units: No 00:07:21.765 Maximum Single Source Range Length: 128 00:07:21.765 Maximum Copy Length: 128 00:07:21.765 Maximum Source Range Count: 128 00:07:21.765 NGUID/EUI64 Never Reused: No 00:07:21.765 Namespace Write Protected: No 00:07:21.765 Number of LBA Formats: 8 00:07:21.765 Current LBA Format: LBA Format #04 00:07:21.765 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.765 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.765 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.765 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.765 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.765 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.765 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.765 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.765 00:07:21.765 NVM Specific Namespace Data 00:07:21.765 =========================== 00:07:21.765 Logical Block Storage Tag Mask: 0 00:07:21.765 Protection Information Capabilities: 00:07:21.765 16b Guard Protection Information Storage Tag Support: No 00:07:21.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.765 Storage Tag Check Read Support: No 00:07:21.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.765 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:21.765 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:22.025 ===================================================== 00:07:22.025 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:22.025 ===================================================== 00:07:22.025 Controller Capabilities/Features 00:07:22.025 ================================ 00:07:22.025 Vendor ID: 1b36 00:07:22.025 Subsystem Vendor ID: 1af4 00:07:22.025 Serial Number: 12340 00:07:22.025 Model Number: QEMU NVMe Ctrl 00:07:22.025 Firmware Version: 8.0.0 00:07:22.025 Recommended Arb Burst: 6 00:07:22.025 IEEE OUI Identifier: 00 54 52 00:07:22.025 Multi-path I/O 00:07:22.025 May have multiple subsystem ports: No 00:07:22.025 May have multiple controllers: No 00:07:22.025 Associated with SR-IOV VF: No 00:07:22.025 Max Data Transfer Size: 524288 00:07:22.025 Max Number of Namespaces: 256 00:07:22.025 Max Number of I/O Queues: 64 00:07:22.025 NVMe Specification Version (VS): 1.4 00:07:22.025 NVMe Specification Version (Identify): 1.4 00:07:22.025 Maximum Queue Entries: 2048 00:07:22.025 Contiguous Queues Required: Yes 00:07:22.025 Arbitration Mechanisms Supported 00:07:22.025 Weighted Round Robin: Not Supported 00:07:22.025 Vendor Specific: Not Supported 00:07:22.025 Reset Timeout: 7500 ms 00:07:22.025 Doorbell Stride: 4 bytes 00:07:22.025 NVM Subsystem Reset: Not Supported 00:07:22.025 Command Sets Supported 00:07:22.025 NVM Command Set: Supported 00:07:22.025 Boot Partition: Not Supported 00:07:22.025 Memory Page Size Minimum: 4096 bytes 00:07:22.025 Memory Page Size Maximum: 65536 bytes 00:07:22.025 Persistent Memory Region: Not Supported 00:07:22.025 Optional Asynchronous Events Supported 00:07:22.025 Namespace Attribute Notices: Supported 00:07:22.025 Firmware Activation Notices: Not Supported 00:07:22.025 ANA Change Notices: Not Supported 00:07:22.025 PLE Aggregate Log Change Notices: Not Supported 00:07:22.025 LBA Status Info Alert Notices: Not Supported 00:07:22.025 EGE Aggregate Log Change Notices: Not Supported 00:07:22.025 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.025 Zone Descriptor Change Notices: Not Supported 00:07:22.025 Discovery Log Change Notices: Not Supported 00:07:22.025 Controller Attributes 00:07:22.025 128-bit Host Identifier: Not Supported 00:07:22.025 Non-Operational Permissive Mode: Not Supported 00:07:22.025 NVM Sets: Not Supported 00:07:22.025 Read Recovery Levels: Not Supported 00:07:22.025 Endurance Groups: Not Supported 00:07:22.025 Predictable Latency Mode: Not Supported 00:07:22.025 Traffic Based Keep ALive: Not Supported 00:07:22.025 Namespace Granularity: Not Supported 00:07:22.025 SQ Associations: Not Supported 00:07:22.025 UUID List: Not Supported 00:07:22.025 Multi-Domain Subsystem: Not Supported 00:07:22.025 Fixed Capacity Management: Not Supported 00:07:22.025 Variable Capacity Management: Not Supported 00:07:22.025 Delete Endurance Group: Not Supported 00:07:22.025 Delete NVM Set: Not Supported 00:07:22.025 Extended LBA Formats Supported: Supported 00:07:22.025 Flexible Data Placement Supported: Not Supported 00:07:22.025 00:07:22.025 Controller Memory Buffer Support 00:07:22.025 ================================ 00:07:22.025 Supported: No 00:07:22.025 00:07:22.025 Persistent Memory Region Support 00:07:22.025 ================================ 00:07:22.025 Supported: No 00:07:22.025 00:07:22.025 Admin Command Set Attributes 00:07:22.025 ============================ 00:07:22.025 Security Send/Receive: Not Supported 00:07:22.025 
Format NVM: Supported 00:07:22.025 Firmware Activate/Download: Not Supported 00:07:22.025 Namespace Management: Supported 00:07:22.025 Device Self-Test: Not Supported 00:07:22.025 Directives: Supported 00:07:22.025 NVMe-MI: Not Supported 00:07:22.025 Virtualization Management: Not Supported 00:07:22.025 Doorbell Buffer Config: Supported 00:07:22.025 Get LBA Status Capability: Not Supported 00:07:22.025 Command & Feature Lockdown Capability: Not Supported 00:07:22.025 Abort Command Limit: 4 00:07:22.025 Async Event Request Limit: 4 00:07:22.025 Number of Firmware Slots: N/A 00:07:22.025 Firmware Slot 1 Read-Only: N/A 00:07:22.025 Firmware Activation Without Reset: N/A 00:07:22.025 Multiple Update Detection Support: N/A 00:07:22.025 Firmware Update Granularity: No Information Provided 00:07:22.025 Per-Namespace SMART Log: Yes 00:07:22.025 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.025 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:22.025 Command Effects Log Page: Supported 00:07:22.025 Get Log Page Extended Data: Supported 00:07:22.025 Telemetry Log Pages: Not Supported 00:07:22.025 Persistent Event Log Pages: Not Supported 00:07:22.025 Supported Log Pages Log Page: May Support 00:07:22.025 Commands Supported & Effects Log Page: Not Supported 00:07:22.025 Feature Identifiers & Effects Log Page:May Support 00:07:22.025 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.025 Data Area 4 for Telemetry Log: Not Supported 00:07:22.025 Error Log Page Entries Supported: 1 00:07:22.025 Keep Alive: Not Supported 00:07:22.025 00:07:22.025 NVM Command Set Attributes 00:07:22.025 ========================== 00:07:22.025 Submission Queue Entry Size 00:07:22.025 Max: 64 00:07:22.025 Min: 64 00:07:22.025 Completion Queue Entry Size 00:07:22.025 Max: 16 00:07:22.025 Min: 16 00:07:22.025 Number of Namespaces: 256 00:07:22.025 Compare Command: Supported 00:07:22.025 Write Uncorrectable Command: Not Supported 00:07:22.025 Dataset Management Command: Supported 00:07:22.025 Write Zeroes Command: Supported 00:07:22.025 Set Features Save Field: Supported 00:07:22.025 Reservations: Not Supported 00:07:22.025 Timestamp: Supported 00:07:22.025 Copy: Supported 00:07:22.025 Volatile Write Cache: Present 00:07:22.025 Atomic Write Unit (Normal): 1 00:07:22.025 Atomic Write Unit (PFail): 1 00:07:22.025 Atomic Compare & Write Unit: 1 00:07:22.025 Fused Compare & Write: Not Supported 00:07:22.025 Scatter-Gather List 00:07:22.025 SGL Command Set: Supported 00:07:22.025 SGL Keyed: Not Supported 00:07:22.025 SGL Bit Bucket Descriptor: Not Supported 00:07:22.025 SGL Metadata Pointer: Not Supported 00:07:22.025 Oversized SGL: Not Supported 00:07:22.025 SGL Metadata Address: Not Supported 00:07:22.025 SGL Offset: Not Supported 00:07:22.025 Transport SGL Data Block: Not Supported 00:07:22.025 Replay Protected Memory Block: Not Supported 00:07:22.025 00:07:22.025 Firmware Slot Information 00:07:22.025 ========================= 00:07:22.025 Active slot: 1 00:07:22.025 Slot 1 Firmware Revision: 1.0 00:07:22.025 00:07:22.025 00:07:22.025 Commands Supported and Effects 00:07:22.025 ============================== 00:07:22.025 Admin Commands 00:07:22.025 -------------- 00:07:22.025 Delete I/O Submission Queue (00h): Supported 00:07:22.025 Create I/O Submission Queue (01h): Supported 00:07:22.025 Get Log Page (02h): Supported 00:07:22.025 Delete I/O Completion Queue (04h): Supported 00:07:22.025 Create I/O Completion Queue (05h): Supported 00:07:22.026 Identify (06h): Supported 00:07:22.026 Abort (08h): Supported 
00:07:22.026 Set Features (09h): Supported 00:07:22.026 Get Features (0Ah): Supported 00:07:22.026 Asynchronous Event Request (0Ch): Supported 00:07:22.026 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.026 Directive Send (19h): Supported 00:07:22.026 Directive Receive (1Ah): Supported 00:07:22.026 Virtualization Management (1Ch): Supported 00:07:22.026 Doorbell Buffer Config (7Ch): Supported 00:07:22.026 Format NVM (80h): Supported LBA-Change 00:07:22.026 I/O Commands 00:07:22.026 ------------ 00:07:22.026 Flush (00h): Supported LBA-Change 00:07:22.026 Write (01h): Supported LBA-Change 00:07:22.026 Read (02h): Supported 00:07:22.026 Compare (05h): Supported 00:07:22.026 Write Zeroes (08h): Supported LBA-Change 00:07:22.026 Dataset Management (09h): Supported LBA-Change 00:07:22.026 Unknown (0Ch): Supported 00:07:22.026 Unknown (12h): Supported 00:07:22.026 Copy (19h): Supported LBA-Change 00:07:22.026 Unknown (1Dh): Supported LBA-Change 00:07:22.026 00:07:22.026 Error Log 00:07:22.026 ========= 00:07:22.026 00:07:22.026 Arbitration 00:07:22.026 =========== 00:07:22.026 Arbitration Burst: no limit 00:07:22.026 00:07:22.026 Power Management 00:07:22.026 ================ 00:07:22.026 Number of Power States: 1 00:07:22.026 Current Power State: Power State #0 00:07:22.026 Power State #0: 00:07:22.026 Max Power: 25.00 W 00:07:22.026 Non-Operational State: Operational 00:07:22.026 Entry Latency: 16 microseconds 00:07:22.026 Exit Latency: 4 microseconds 00:07:22.026 Relative Read Throughput: 0 00:07:22.026 Relative Read Latency: 0 00:07:22.026 Relative Write Throughput: 0 00:07:22.026 Relative Write Latency: 0 00:07:22.026 Idle Power: Not Reported 00:07:22.026 Active Power: Not Reported 00:07:22.026 Non-Operational Permissive Mode: Not Supported 00:07:22.026 00:07:22.026 Health Information 00:07:22.026 ================== 00:07:22.026 Critical Warnings: 00:07:22.026 Available Spare Space: OK 00:07:22.026 Temperature: OK 00:07:22.026 Device Reliability: OK 00:07:22.026 Read Only: No 00:07:22.026 Volatile Memory Backup: OK 00:07:22.026 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.026 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.026 Available Spare: 0% 00:07:22.026 Available Spare Threshold: 0% 00:07:22.026 Life Percentage Used: 0% 00:07:22.026 Data Units Read: 642 00:07:22.026 Data Units Written: 570 00:07:22.026 Host Read Commands: 36836 00:07:22.026 Host Write Commands: 36622 00:07:22.026 Controller Busy Time: 0 minutes 00:07:22.026 Power Cycles: 0 00:07:22.026 Power On Hours: 0 hours 00:07:22.026 Unsafe Shutdowns: 0 00:07:22.026 Unrecoverable Media Errors: 0 00:07:22.026 Lifetime Error Log Entries: 0 00:07:22.026 Warning Temperature Time: 0 minutes 00:07:22.026 Critical Temperature Time: 0 minutes 00:07:22.026 00:07:22.026 Number of Queues 00:07:22.026 ================ 00:07:22.026 Number of I/O Submission Queues: 64 00:07:22.026 Number of I/O Completion Queues: 64 00:07:22.026 00:07:22.026 ZNS Specific Controller Data 00:07:22.026 ============================ 00:07:22.026 Zone Append Size Limit: 0 00:07:22.026 00:07:22.026 00:07:22.026 Active Namespaces 00:07:22.026 ================= 00:07:22.026 Namespace ID:1 00:07:22.026 Error Recovery Timeout: Unlimited 00:07:22.026 Command Set Identifier: NVM (00h) 00:07:22.026 Deallocate: Supported 00:07:22.026 Deallocated/Unwritten Error: Supported 00:07:22.026 Deallocated Read Value: All 0x00 00:07:22.026 Deallocate in Write Zeroes: Not Supported 00:07:22.026 Deallocated Guard Field: 0xFFFF 00:07:22.026 Flush: 
Supported 00:07:22.026 Reservation: Not Supported 00:07:22.026 Metadata Transferred as: Separate Metadata Buffer 00:07:22.026 Namespace Sharing Capabilities: Private 00:07:22.026 Size (in LBAs): 1548666 (5GiB) 00:07:22.026 Capacity (in LBAs): 1548666 (5GiB) 00:07:22.026 Utilization (in LBAs): 1548666 (5GiB) 00:07:22.026 Thin Provisioning: Not Supported 00:07:22.026 Per-NS Atomic Units: No 00:07:22.026 Maximum Single Source Range Length: 128 00:07:22.026 Maximum Copy Length: 128 00:07:22.026 Maximum Source Range Count: 128 00:07:22.026 NGUID/EUI64 Never Reused: No 00:07:22.026 Namespace Write Protected: No 00:07:22.026 Number of LBA Formats: 8 00:07:22.026 Current LBA Format: LBA Format #07 00:07:22.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.026 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.026 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.026 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.026 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.026 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.026 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.026 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.026 00:07:22.026 NVM Specific Namespace Data 00:07:22.026 =========================== 00:07:22.026 Logical Block Storage Tag Mask: 0 00:07:22.026 Protection Information Capabilities: 00:07:22.026 16b Guard Protection Information Storage Tag Support: No 00:07:22.026 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.026 Storage Tag Check Read Support: No 00:07:22.026 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.026 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:22.026 04:27:44 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:22.026 ===================================================== 00:07:22.026 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:22.026 ===================================================== 00:07:22.026 Controller Capabilities/Features 00:07:22.026 ================================ 00:07:22.026 Vendor ID: 1b36 00:07:22.026 Subsystem Vendor ID: 1af4 00:07:22.026 Serial Number: 12341 00:07:22.026 Model Number: QEMU NVMe Ctrl 00:07:22.026 Firmware Version: 8.0.0 00:07:22.026 Recommended Arb Burst: 6 00:07:22.026 IEEE OUI Identifier: 00 54 52 00:07:22.026 Multi-path I/O 00:07:22.026 May have multiple subsystem ports: No 00:07:22.026 May have multiple controllers: No 00:07:22.026 Associated with SR-IOV VF: No 00:07:22.026 Max Data Transfer Size: 524288 00:07:22.026 Max Number of Namespaces: 256 00:07:22.026 Max Number of I/O Queues: 64 00:07:22.026 NVMe 
Specification Version (VS): 1.4 00:07:22.026 NVMe Specification Version (Identify): 1.4 00:07:22.026 Maximum Queue Entries: 2048 00:07:22.026 Contiguous Queues Required: Yes 00:07:22.026 Arbitration Mechanisms Supported 00:07:22.026 Weighted Round Robin: Not Supported 00:07:22.026 Vendor Specific: Not Supported 00:07:22.026 Reset Timeout: 7500 ms 00:07:22.026 Doorbell Stride: 4 bytes 00:07:22.026 NVM Subsystem Reset: Not Supported 00:07:22.026 Command Sets Supported 00:07:22.026 NVM Command Set: Supported 00:07:22.026 Boot Partition: Not Supported 00:07:22.026 Memory Page Size Minimum: 4096 bytes 00:07:22.026 Memory Page Size Maximum: 65536 bytes 00:07:22.026 Persistent Memory Region: Not Supported 00:07:22.026 Optional Asynchronous Events Supported 00:07:22.026 Namespace Attribute Notices: Supported 00:07:22.026 Firmware Activation Notices: Not Supported 00:07:22.026 ANA Change Notices: Not Supported 00:07:22.026 PLE Aggregate Log Change Notices: Not Supported 00:07:22.026 LBA Status Info Alert Notices: Not Supported 00:07:22.026 EGE Aggregate Log Change Notices: Not Supported 00:07:22.026 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.026 Zone Descriptor Change Notices: Not Supported 00:07:22.026 Discovery Log Change Notices: Not Supported 00:07:22.026 Controller Attributes 00:07:22.026 128-bit Host Identifier: Not Supported 00:07:22.026 Non-Operational Permissive Mode: Not Supported 00:07:22.026 NVM Sets: Not Supported 00:07:22.026 Read Recovery Levels: Not Supported 00:07:22.026 Endurance Groups: Not Supported 00:07:22.026 Predictable Latency Mode: Not Supported 00:07:22.026 Traffic Based Keep ALive: Not Supported 00:07:22.026 Namespace Granularity: Not Supported 00:07:22.026 SQ Associations: Not Supported 00:07:22.027 UUID List: Not Supported 00:07:22.027 Multi-Domain Subsystem: Not Supported 00:07:22.027 Fixed Capacity Management: Not Supported 00:07:22.027 Variable Capacity Management: Not Supported 00:07:22.027 Delete Endurance Group: Not Supported 00:07:22.027 Delete NVM Set: Not Supported 00:07:22.027 Extended LBA Formats Supported: Supported 00:07:22.027 Flexible Data Placement Supported: Not Supported 00:07:22.027 00:07:22.027 Controller Memory Buffer Support 00:07:22.027 ================================ 00:07:22.027 Supported: No 00:07:22.027 00:07:22.027 Persistent Memory Region Support 00:07:22.027 ================================ 00:07:22.027 Supported: No 00:07:22.027 00:07:22.027 Admin Command Set Attributes 00:07:22.027 ============================ 00:07:22.027 Security Send/Receive: Not Supported 00:07:22.027 Format NVM: Supported 00:07:22.027 Firmware Activate/Download: Not Supported 00:07:22.027 Namespace Management: Supported 00:07:22.027 Device Self-Test: Not Supported 00:07:22.027 Directives: Supported 00:07:22.027 NVMe-MI: Not Supported 00:07:22.027 Virtualization Management: Not Supported 00:07:22.027 Doorbell Buffer Config: Supported 00:07:22.027 Get LBA Status Capability: Not Supported 00:07:22.027 Command & Feature Lockdown Capability: Not Supported 00:07:22.027 Abort Command Limit: 4 00:07:22.027 Async Event Request Limit: 4 00:07:22.027 Number of Firmware Slots: N/A 00:07:22.027 Firmware Slot 1 Read-Only: N/A 00:07:22.027 Firmware Activation Without Reset: N/A 00:07:22.027 Multiple Update Detection Support: N/A 00:07:22.027 Firmware Update Granularity: No Information Provided 00:07:22.027 Per-Namespace SMART Log: Yes 00:07:22.027 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.027 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:22.027 Command Effects Log Page: Supported 00:07:22.027 Get Log Page Extended Data: Supported 00:07:22.027 Telemetry Log Pages: Not Supported 00:07:22.027 Persistent Event Log Pages: Not Supported 00:07:22.027 Supported Log Pages Log Page: May Support 00:07:22.027 Commands Supported & Effects Log Page: Not Supported 00:07:22.027 Feature Identifiers & Effects Log Page:May Support 00:07:22.027 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.027 Data Area 4 for Telemetry Log: Not Supported 00:07:22.027 Error Log Page Entries Supported: 1 00:07:22.027 Keep Alive: Not Supported 00:07:22.027 00:07:22.027 NVM Command Set Attributes 00:07:22.027 ========================== 00:07:22.027 Submission Queue Entry Size 00:07:22.027 Max: 64 00:07:22.027 Min: 64 00:07:22.027 Completion Queue Entry Size 00:07:22.027 Max: 16 00:07:22.027 Min: 16 00:07:22.027 Number of Namespaces: 256 00:07:22.027 Compare Command: Supported 00:07:22.027 Write Uncorrectable Command: Not Supported 00:07:22.027 Dataset Management Command: Supported 00:07:22.027 Write Zeroes Command: Supported 00:07:22.027 Set Features Save Field: Supported 00:07:22.027 Reservations: Not Supported 00:07:22.027 Timestamp: Supported 00:07:22.027 Copy: Supported 00:07:22.027 Volatile Write Cache: Present 00:07:22.027 Atomic Write Unit (Normal): 1 00:07:22.027 Atomic Write Unit (PFail): 1 00:07:22.027 Atomic Compare & Write Unit: 1 00:07:22.027 Fused Compare & Write: Not Supported 00:07:22.027 Scatter-Gather List 00:07:22.027 SGL Command Set: Supported 00:07:22.027 SGL Keyed: Not Supported 00:07:22.027 SGL Bit Bucket Descriptor: Not Supported 00:07:22.027 SGL Metadata Pointer: Not Supported 00:07:22.027 Oversized SGL: Not Supported 00:07:22.027 SGL Metadata Address: Not Supported 00:07:22.027 SGL Offset: Not Supported 00:07:22.027 Transport SGL Data Block: Not Supported 00:07:22.027 Replay Protected Memory Block: Not Supported 00:07:22.027 00:07:22.027 Firmware Slot Information 00:07:22.027 ========================= 00:07:22.027 Active slot: 1 00:07:22.027 Slot 1 Firmware Revision: 1.0 00:07:22.027 00:07:22.027 00:07:22.027 Commands Supported and Effects 00:07:22.027 ============================== 00:07:22.027 Admin Commands 00:07:22.027 -------------- 00:07:22.027 Delete I/O Submission Queue (00h): Supported 00:07:22.027 Create I/O Submission Queue (01h): Supported 00:07:22.027 Get Log Page (02h): Supported 00:07:22.027 Delete I/O Completion Queue (04h): Supported 00:07:22.027 Create I/O Completion Queue (05h): Supported 00:07:22.027 Identify (06h): Supported 00:07:22.027 Abort (08h): Supported 00:07:22.027 Set Features (09h): Supported 00:07:22.027 Get Features (0Ah): Supported 00:07:22.027 Asynchronous Event Request (0Ch): Supported 00:07:22.027 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.027 Directive Send (19h): Supported 00:07:22.027 Directive Receive (1Ah): Supported 00:07:22.027 Virtualization Management (1Ch): Supported 00:07:22.027 Doorbell Buffer Config (7Ch): Supported 00:07:22.027 Format NVM (80h): Supported LBA-Change 00:07:22.027 I/O Commands 00:07:22.027 ------------ 00:07:22.027 Flush (00h): Supported LBA-Change 00:07:22.027 Write (01h): Supported LBA-Change 00:07:22.027 Read (02h): Supported 00:07:22.027 Compare (05h): Supported 00:07:22.027 Write Zeroes (08h): Supported LBA-Change 00:07:22.027 Dataset Management (09h): Supported LBA-Change 00:07:22.027 Unknown (0Ch): Supported 00:07:22.027 Unknown (12h): Supported 00:07:22.027 Copy (19h): Supported LBA-Change 00:07:22.027 Unknown (1Dh): 
Supported LBA-Change 00:07:22.027 00:07:22.027 Error Log 00:07:22.027 ========= 00:07:22.027 00:07:22.027 Arbitration 00:07:22.027 =========== 00:07:22.027 Arbitration Burst: no limit 00:07:22.027 00:07:22.027 Power Management 00:07:22.027 ================ 00:07:22.027 Number of Power States: 1 00:07:22.027 Current Power State: Power State #0 00:07:22.027 Power State #0: 00:07:22.027 Max Power: 25.00 W 00:07:22.027 Non-Operational State: Operational 00:07:22.027 Entry Latency: 16 microseconds 00:07:22.027 Exit Latency: 4 microseconds 00:07:22.027 Relative Read Throughput: 0 00:07:22.027 Relative Read Latency: 0 00:07:22.027 Relative Write Throughput: 0 00:07:22.027 Relative Write Latency: 0 00:07:22.286 Idle Power: Not Reported 00:07:22.286 Active Power: Not Reported 00:07:22.286 Non-Operational Permissive Mode: Not Supported 00:07:22.286 00:07:22.286 Health Information 00:07:22.286 ================== 00:07:22.286 Critical Warnings: 00:07:22.286 Available Spare Space: OK 00:07:22.286 Temperature: OK 00:07:22.286 Device Reliability: OK 00:07:22.286 Read Only: No 00:07:22.286 Volatile Memory Backup: OK 00:07:22.286 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.286 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.286 Available Spare: 0% 00:07:22.286 Available Spare Threshold: 0% 00:07:22.286 Life Percentage Used: 0% 00:07:22.286 Data Units Read: 1055 00:07:22.286 Data Units Written: 922 00:07:22.286 Host Read Commands: 55100 00:07:22.286 Host Write Commands: 53896 00:07:22.286 Controller Busy Time: 0 minutes 00:07:22.286 Power Cycles: 0 00:07:22.286 Power On Hours: 0 hours 00:07:22.286 Unsafe Shutdowns: 0 00:07:22.286 Unrecoverable Media Errors: 0 00:07:22.286 Lifetime Error Log Entries: 0 00:07:22.286 Warning Temperature Time: 0 minutes 00:07:22.286 Critical Temperature Time: 0 minutes 00:07:22.286 00:07:22.286 Number of Queues 00:07:22.286 ================ 00:07:22.286 Number of I/O Submission Queues: 64 00:07:22.286 Number of I/O Completion Queues: 64 00:07:22.286 00:07:22.286 ZNS Specific Controller Data 00:07:22.286 ============================ 00:07:22.286 Zone Append Size Limit: 0 00:07:22.286 00:07:22.286 00:07:22.286 Active Namespaces 00:07:22.286 ================= 00:07:22.287 Namespace ID:1 00:07:22.287 Error Recovery Timeout: Unlimited 00:07:22.287 Command Set Identifier: NVM (00h) 00:07:22.287 Deallocate: Supported 00:07:22.287 Deallocated/Unwritten Error: Supported 00:07:22.287 Deallocated Read Value: All 0x00 00:07:22.287 Deallocate in Write Zeroes: Not Supported 00:07:22.287 Deallocated Guard Field: 0xFFFF 00:07:22.287 Flush: Supported 00:07:22.287 Reservation: Not Supported 00:07:22.287 Namespace Sharing Capabilities: Private 00:07:22.287 Size (in LBAs): 1310720 (5GiB) 00:07:22.287 Capacity (in LBAs): 1310720 (5GiB) 00:07:22.287 Utilization (in LBAs): 1310720 (5GiB) 00:07:22.287 Thin Provisioning: Not Supported 00:07:22.287 Per-NS Atomic Units: No 00:07:22.287 Maximum Single Source Range Length: 128 00:07:22.287 Maximum Copy Length: 128 00:07:22.287 Maximum Source Range Count: 128 00:07:22.287 NGUID/EUI64 Never Reused: No 00:07:22.287 Namespace Write Protected: No 00:07:22.287 Number of LBA Formats: 8 00:07:22.287 Current LBA Format: LBA Format #04 00:07:22.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:22.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.287 00:07:22.287 NVM Specific Namespace Data 00:07:22.287 =========================== 00:07:22.287 Logical Block Storage Tag Mask: 0 00:07:22.287 Protection Information Capabilities: 00:07:22.287 16b Guard Protection Information Storage Tag Support: No 00:07:22.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.287 Storage Tag Check Read Support: No 00:07:22.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.287 04:27:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:22.287 04:27:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:22.287 ===================================================== 00:07:22.287 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:22.287 ===================================================== 00:07:22.287 Controller Capabilities/Features 00:07:22.287 ================================ 00:07:22.287 Vendor ID: 1b36 00:07:22.287 Subsystem Vendor ID: 1af4 00:07:22.287 Serial Number: 12342 00:07:22.287 Model Number: QEMU NVMe Ctrl 00:07:22.287 Firmware Version: 8.0.0 00:07:22.287 Recommended Arb Burst: 6 00:07:22.287 IEEE OUI Identifier: 00 54 52 00:07:22.287 Multi-path I/O 00:07:22.287 May have multiple subsystem ports: No 00:07:22.287 May have multiple controllers: No 00:07:22.287 Associated with SR-IOV VF: No 00:07:22.287 Max Data Transfer Size: 524288 00:07:22.287 Max Number of Namespaces: 256 00:07:22.287 Max Number of I/O Queues: 64 00:07:22.287 NVMe Specification Version (VS): 1.4 00:07:22.287 NVMe Specification Version (Identify): 1.4 00:07:22.287 Maximum Queue Entries: 2048 00:07:22.287 Contiguous Queues Required: Yes 00:07:22.287 Arbitration Mechanisms Supported 00:07:22.287 Weighted Round Robin: Not Supported 00:07:22.287 Vendor Specific: Not Supported 00:07:22.287 Reset Timeout: 7500 ms 00:07:22.287 Doorbell Stride: 4 bytes 00:07:22.287 NVM Subsystem Reset: Not Supported 00:07:22.287 Command Sets Supported 00:07:22.287 NVM Command Set: Supported 00:07:22.287 Boot Partition: Not Supported 00:07:22.287 Memory Page Size Minimum: 4096 bytes 00:07:22.287 Memory Page Size Maximum: 65536 bytes 00:07:22.287 Persistent Memory Region: Not Supported 00:07:22.287 Optional Asynchronous Events Supported 00:07:22.287 Namespace Attribute Notices: Supported 00:07:22.287 Firmware Activation Notices: Not Supported 00:07:22.287 ANA Change Notices: Not Supported 00:07:22.287 PLE Aggregate Log Change Notices: Not Supported 00:07:22.287 LBA Status Info Alert Notices: 
Not Supported 00:07:22.287 EGE Aggregate Log Change Notices: Not Supported 00:07:22.287 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.287 Zone Descriptor Change Notices: Not Supported 00:07:22.287 Discovery Log Change Notices: Not Supported 00:07:22.287 Controller Attributes 00:07:22.287 128-bit Host Identifier: Not Supported 00:07:22.287 Non-Operational Permissive Mode: Not Supported 00:07:22.287 NVM Sets: Not Supported 00:07:22.287 Read Recovery Levels: Not Supported 00:07:22.287 Endurance Groups: Not Supported 00:07:22.287 Predictable Latency Mode: Not Supported 00:07:22.287 Traffic Based Keep ALive: Not Supported 00:07:22.287 Namespace Granularity: Not Supported 00:07:22.287 SQ Associations: Not Supported 00:07:22.287 UUID List: Not Supported 00:07:22.287 Multi-Domain Subsystem: Not Supported 00:07:22.287 Fixed Capacity Management: Not Supported 00:07:22.287 Variable Capacity Management: Not Supported 00:07:22.287 Delete Endurance Group: Not Supported 00:07:22.287 Delete NVM Set: Not Supported 00:07:22.287 Extended LBA Formats Supported: Supported 00:07:22.287 Flexible Data Placement Supported: Not Supported 00:07:22.287 00:07:22.287 Controller Memory Buffer Support 00:07:22.287 ================================ 00:07:22.287 Supported: No 00:07:22.287 00:07:22.287 Persistent Memory Region Support 00:07:22.287 ================================ 00:07:22.287 Supported: No 00:07:22.287 00:07:22.287 Admin Command Set Attributes 00:07:22.287 ============================ 00:07:22.287 Security Send/Receive: Not Supported 00:07:22.287 Format NVM: Supported 00:07:22.287 Firmware Activate/Download: Not Supported 00:07:22.287 Namespace Management: Supported 00:07:22.287 Device Self-Test: Not Supported 00:07:22.287 Directives: Supported 00:07:22.287 NVMe-MI: Not Supported 00:07:22.287 Virtualization Management: Not Supported 00:07:22.287 Doorbell Buffer Config: Supported 00:07:22.287 Get LBA Status Capability: Not Supported 00:07:22.287 Command & Feature Lockdown Capability: Not Supported 00:07:22.287 Abort Command Limit: 4 00:07:22.287 Async Event Request Limit: 4 00:07:22.287 Number of Firmware Slots: N/A 00:07:22.287 Firmware Slot 1 Read-Only: N/A 00:07:22.287 Firmware Activation Without Reset: N/A 00:07:22.287 Multiple Update Detection Support: N/A 00:07:22.287 Firmware Update Granularity: No Information Provided 00:07:22.287 Per-Namespace SMART Log: Yes 00:07:22.287 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.287 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:22.287 Command Effects Log Page: Supported 00:07:22.287 Get Log Page Extended Data: Supported 00:07:22.287 Telemetry Log Pages: Not Supported 00:07:22.287 Persistent Event Log Pages: Not Supported 00:07:22.287 Supported Log Pages Log Page: May Support 00:07:22.287 Commands Supported & Effects Log Page: Not Supported 00:07:22.287 Feature Identifiers & Effects Log Page:May Support 00:07:22.287 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.287 Data Area 4 for Telemetry Log: Not Supported 00:07:22.287 Error Log Page Entries Supported: 1 00:07:22.287 Keep Alive: Not Supported 00:07:22.287 00:07:22.287 NVM Command Set Attributes 00:07:22.287 ========================== 00:07:22.287 Submission Queue Entry Size 00:07:22.287 Max: 64 00:07:22.287 Min: 64 00:07:22.287 Completion Queue Entry Size 00:07:22.287 Max: 16 00:07:22.287 Min: 16 00:07:22.287 Number of Namespaces: 256 00:07:22.287 Compare Command: Supported 00:07:22.287 Write Uncorrectable Command: Not Supported 00:07:22.287 Dataset Management Command: 
Supported 00:07:22.287 Write Zeroes Command: Supported 00:07:22.287 Set Features Save Field: Supported 00:07:22.287 Reservations: Not Supported 00:07:22.287 Timestamp: Supported 00:07:22.287 Copy: Supported 00:07:22.287 Volatile Write Cache: Present 00:07:22.287 Atomic Write Unit (Normal): 1 00:07:22.287 Atomic Write Unit (PFail): 1 00:07:22.287 Atomic Compare & Write Unit: 1 00:07:22.287 Fused Compare & Write: Not Supported 00:07:22.287 Scatter-Gather List 00:07:22.287 SGL Command Set: Supported 00:07:22.287 SGL Keyed: Not Supported 00:07:22.287 SGL Bit Bucket Descriptor: Not Supported 00:07:22.287 SGL Metadata Pointer: Not Supported 00:07:22.287 Oversized SGL: Not Supported 00:07:22.287 SGL Metadata Address: Not Supported 00:07:22.287 SGL Offset: Not Supported 00:07:22.287 Transport SGL Data Block: Not Supported 00:07:22.287 Replay Protected Memory Block: Not Supported 00:07:22.287 00:07:22.287 Firmware Slot Information 00:07:22.288 ========================= 00:07:22.288 Active slot: 1 00:07:22.288 Slot 1 Firmware Revision: 1.0 00:07:22.288 00:07:22.288 00:07:22.288 Commands Supported and Effects 00:07:22.288 ============================== 00:07:22.288 Admin Commands 00:07:22.288 -------------- 00:07:22.288 Delete I/O Submission Queue (00h): Supported 00:07:22.288 Create I/O Submission Queue (01h): Supported 00:07:22.288 Get Log Page (02h): Supported 00:07:22.288 Delete I/O Completion Queue (04h): Supported 00:07:22.288 Create I/O Completion Queue (05h): Supported 00:07:22.288 Identify (06h): Supported 00:07:22.288 Abort (08h): Supported 00:07:22.288 Set Features (09h): Supported 00:07:22.288 Get Features (0Ah): Supported 00:07:22.288 Asynchronous Event Request (0Ch): Supported 00:07:22.288 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.288 Directive Send (19h): Supported 00:07:22.288 Directive Receive (1Ah): Supported 00:07:22.288 Virtualization Management (1Ch): Supported 00:07:22.288 Doorbell Buffer Config (7Ch): Supported 00:07:22.288 Format NVM (80h): Supported LBA-Change 00:07:22.288 I/O Commands 00:07:22.288 ------------ 00:07:22.288 Flush (00h): Supported LBA-Change 00:07:22.288 Write (01h): Supported LBA-Change 00:07:22.288 Read (02h): Supported 00:07:22.288 Compare (05h): Supported 00:07:22.288 Write Zeroes (08h): Supported LBA-Change 00:07:22.288 Dataset Management (09h): Supported LBA-Change 00:07:22.288 Unknown (0Ch): Supported 00:07:22.288 Unknown (12h): Supported 00:07:22.288 Copy (19h): Supported LBA-Change 00:07:22.288 Unknown (1Dh): Supported LBA-Change 00:07:22.288 00:07:22.288 Error Log 00:07:22.288 ========= 00:07:22.288 00:07:22.288 Arbitration 00:07:22.288 =========== 00:07:22.288 Arbitration Burst: no limit 00:07:22.288 00:07:22.288 Power Management 00:07:22.288 ================ 00:07:22.288 Number of Power States: 1 00:07:22.288 Current Power State: Power State #0 00:07:22.288 Power State #0: 00:07:22.288 Max Power: 25.00 W 00:07:22.288 Non-Operational State: Operational 00:07:22.288 Entry Latency: 16 microseconds 00:07:22.288 Exit Latency: 4 microseconds 00:07:22.288 Relative Read Throughput: 0 00:07:22.288 Relative Read Latency: 0 00:07:22.288 Relative Write Throughput: 0 00:07:22.288 Relative Write Latency: 0 00:07:22.288 Idle Power: Not Reported 00:07:22.288 Active Power: Not Reported 00:07:22.288 Non-Operational Permissive Mode: Not Supported 00:07:22.288 00:07:22.288 Health Information 00:07:22.288 ================== 00:07:22.288 Critical Warnings: 00:07:22.288 Available Spare Space: OK 00:07:22.288 Temperature: OK 00:07:22.288 Device 
Reliability: OK 00:07:22.288 Read Only: No 00:07:22.288 Volatile Memory Backup: OK 00:07:22.288 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.288 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.288 Available Spare: 0% 00:07:22.288 Available Spare Threshold: 0% 00:07:22.288 Life Percentage Used: 0% 00:07:22.288 Data Units Read: 2076 00:07:22.288 Data Units Written: 1863 00:07:22.288 Host Read Commands: 112215 00:07:22.288 Host Write Commands: 110484 00:07:22.288 Controller Busy Time: 0 minutes 00:07:22.288 Power Cycles: 0 00:07:22.288 Power On Hours: 0 hours 00:07:22.288 Unsafe Shutdowns: 0 00:07:22.288 Unrecoverable Media Errors: 0 00:07:22.288 Lifetime Error Log Entries: 0 00:07:22.288 Warning Temperature Time: 0 minutes 00:07:22.288 Critical Temperature Time: 0 minutes 00:07:22.288 00:07:22.288 Number of Queues 00:07:22.288 ================ 00:07:22.288 Number of I/O Submission Queues: 64 00:07:22.288 Number of I/O Completion Queues: 64 00:07:22.288 00:07:22.288 ZNS Specific Controller Data 00:07:22.288 ============================ 00:07:22.288 Zone Append Size Limit: 0 00:07:22.288 00:07:22.288 00:07:22.288 Active Namespaces 00:07:22.288 ================= 00:07:22.288 Namespace ID:1 00:07:22.288 Error Recovery Timeout: Unlimited 00:07:22.288 Command Set Identifier: NVM (00h) 00:07:22.288 Deallocate: Supported 00:07:22.288 Deallocated/Unwritten Error: Supported 00:07:22.288 Deallocated Read Value: All 0x00 00:07:22.288 Deallocate in Write Zeroes: Not Supported 00:07:22.288 Deallocated Guard Field: 0xFFFF 00:07:22.288 Flush: Supported 00:07:22.288 Reservation: Not Supported 00:07:22.288 Namespace Sharing Capabilities: Private 00:07:22.288 Size (in LBAs): 1048576 (4GiB) 00:07:22.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.288 Thin Provisioning: Not Supported 00:07:22.288 Per-NS Atomic Units: No 00:07:22.288 Maximum Single Source Range Length: 128 00:07:22.288 Maximum Copy Length: 128 00:07:22.288 Maximum Source Range Count: 128 00:07:22.288 NGUID/EUI64 Never Reused: No 00:07:22.288 Namespace Write Protected: No 00:07:22.288 Number of LBA Formats: 8 00:07:22.288 Current LBA Format: LBA Format #04 00:07:22.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.288 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.288 00:07:22.288 NVM Specific Namespace Data 00:07:22.288 =========================== 00:07:22.288 Logical Block Storage Tag Mask: 0 00:07:22.288 Protection Information Capabilities: 00:07:22.288 16b Guard Protection Information Storage Tag Support: No 00:07:22.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.288 Storage Tag Check Read Support: No 00:07:22.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Namespace ID:2 00:07:22.288 Error Recovery Timeout: Unlimited 00:07:22.288 Command Set Identifier: NVM (00h) 00:07:22.288 Deallocate: Supported 00:07:22.288 Deallocated/Unwritten Error: Supported 00:07:22.288 Deallocated Read Value: All 0x00 00:07:22.288 Deallocate in Write Zeroes: Not Supported 00:07:22.288 Deallocated Guard Field: 0xFFFF 00:07:22.288 Flush: Supported 00:07:22.288 Reservation: Not Supported 00:07:22.288 Namespace Sharing Capabilities: Private 00:07:22.288 Size (in LBAs): 1048576 (4GiB) 00:07:22.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.288 Thin Provisioning: Not Supported 00:07:22.288 Per-NS Atomic Units: No 00:07:22.288 Maximum Single Source Range Length: 128 00:07:22.288 Maximum Copy Length: 128 00:07:22.288 Maximum Source Range Count: 128 00:07:22.288 NGUID/EUI64 Never Reused: No 00:07:22.288 Namespace Write Protected: No 00:07:22.288 Number of LBA Formats: 8 00:07:22.288 Current LBA Format: LBA Format #04 00:07:22.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.288 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.288 00:07:22.288 NVM Specific Namespace Data 00:07:22.288 =========================== 00:07:22.288 Logical Block Storage Tag Mask: 0 00:07:22.288 Protection Information Capabilities: 00:07:22.288 16b Guard Protection Information Storage Tag Support: No 00:07:22.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.288 Storage Tag Check Read Support: No 00:07:22.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.288 Namespace ID:3 00:07:22.288 Error Recovery Timeout: Unlimited 00:07:22.288 Command Set Identifier: NVM (00h) 00:07:22.288 Deallocate: Supported 00:07:22.288 Deallocated/Unwritten Error: Supported 00:07:22.289 Deallocated Read Value: All 0x00 00:07:22.289 Deallocate in Write Zeroes: Not Supported 00:07:22.289 Deallocated Guard Field: 0xFFFF 00:07:22.289 Flush: Supported 00:07:22.289 Reservation: Not Supported 00:07:22.289 
Namespace Sharing Capabilities: Private 00:07:22.289 Size (in LBAs): 1048576 (4GiB) 00:07:22.289 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.289 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.289 Thin Provisioning: Not Supported 00:07:22.289 Per-NS Atomic Units: No 00:07:22.289 Maximum Single Source Range Length: 128 00:07:22.289 Maximum Copy Length: 128 00:07:22.289 Maximum Source Range Count: 128 00:07:22.289 NGUID/EUI64 Never Reused: No 00:07:22.289 Namespace Write Protected: No 00:07:22.289 Number of LBA Formats: 8 00:07:22.289 Current LBA Format: LBA Format #04 00:07:22.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.289 00:07:22.289 NVM Specific Namespace Data 00:07:22.289 =========================== 00:07:22.289 Logical Block Storage Tag Mask: 0 00:07:22.289 Protection Information Capabilities: 00:07:22.289 16b Guard Protection Information Storage Tag Support: No 00:07:22.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.289 Storage Tag Check Read Support: No 00:07:22.289 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.289 04:27:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:22.289 04:27:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:22.547 ===================================================== 00:07:22.547 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:22.547 ===================================================== 00:07:22.547 Controller Capabilities/Features 00:07:22.547 ================================ 00:07:22.547 Vendor ID: 1b36 00:07:22.547 Subsystem Vendor ID: 1af4 00:07:22.547 Serial Number: 12343 00:07:22.547 Model Number: QEMU NVMe Ctrl 00:07:22.547 Firmware Version: 8.0.0 00:07:22.547 Recommended Arb Burst: 6 00:07:22.547 IEEE OUI Identifier: 00 54 52 00:07:22.547 Multi-path I/O 00:07:22.547 May have multiple subsystem ports: No 00:07:22.547 May have multiple controllers: Yes 00:07:22.547 Associated with SR-IOV VF: No 00:07:22.547 Max Data Transfer Size: 524288 00:07:22.547 Max Number of Namespaces: 256 00:07:22.547 Max Number of I/O Queues: 64 00:07:22.547 NVMe Specification Version (VS): 1.4 00:07:22.547 NVMe Specification Version (Identify): 1.4 00:07:22.547 Maximum Queue Entries: 2048 
00:07:22.547 Contiguous Queues Required: Yes 00:07:22.547 Arbitration Mechanisms Supported 00:07:22.547 Weighted Round Robin: Not Supported 00:07:22.547 Vendor Specific: Not Supported 00:07:22.547 Reset Timeout: 7500 ms 00:07:22.547 Doorbell Stride: 4 bytes 00:07:22.547 NVM Subsystem Reset: Not Supported 00:07:22.547 Command Sets Supported 00:07:22.547 NVM Command Set: Supported 00:07:22.547 Boot Partition: Not Supported 00:07:22.547 Memory Page Size Minimum: 4096 bytes 00:07:22.547 Memory Page Size Maximum: 65536 bytes 00:07:22.547 Persistent Memory Region: Not Supported 00:07:22.547 Optional Asynchronous Events Supported 00:07:22.547 Namespace Attribute Notices: Supported 00:07:22.547 Firmware Activation Notices: Not Supported 00:07:22.547 ANA Change Notices: Not Supported 00:07:22.547 PLE Aggregate Log Change Notices: Not Supported 00:07:22.547 LBA Status Info Alert Notices: Not Supported 00:07:22.547 EGE Aggregate Log Change Notices: Not Supported 00:07:22.547 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.547 Zone Descriptor Change Notices: Not Supported 00:07:22.547 Discovery Log Change Notices: Not Supported 00:07:22.547 Controller Attributes 00:07:22.547 128-bit Host Identifier: Not Supported 00:07:22.547 Non-Operational Permissive Mode: Not Supported 00:07:22.547 NVM Sets: Not Supported 00:07:22.547 Read Recovery Levels: Not Supported 00:07:22.547 Endurance Groups: Supported 00:07:22.547 Predictable Latency Mode: Not Supported 00:07:22.547 Traffic Based Keep ALive: Not Supported 00:07:22.547 Namespace Granularity: Not Supported 00:07:22.547 SQ Associations: Not Supported 00:07:22.547 UUID List: Not Supported 00:07:22.547 Multi-Domain Subsystem: Not Supported 00:07:22.547 Fixed Capacity Management: Not Supported 00:07:22.547 Variable Capacity Management: Not Supported 00:07:22.547 Delete Endurance Group: Not Supported 00:07:22.547 Delete NVM Set: Not Supported 00:07:22.547 Extended LBA Formats Supported: Supported 00:07:22.547 Flexible Data Placement Supported: Supported 00:07:22.547 00:07:22.547 Controller Memory Buffer Support 00:07:22.547 ================================ 00:07:22.547 Supported: No 00:07:22.547 00:07:22.547 Persistent Memory Region Support 00:07:22.547 ================================ 00:07:22.547 Supported: No 00:07:22.547 00:07:22.547 Admin Command Set Attributes 00:07:22.547 ============================ 00:07:22.547 Security Send/Receive: Not Supported 00:07:22.547 Format NVM: Supported 00:07:22.547 Firmware Activate/Download: Not Supported 00:07:22.547 Namespace Management: Supported 00:07:22.547 Device Self-Test: Not Supported 00:07:22.547 Directives: Supported 00:07:22.547 NVMe-MI: Not Supported 00:07:22.548 Virtualization Management: Not Supported 00:07:22.548 Doorbell Buffer Config: Supported 00:07:22.548 Get LBA Status Capability: Not Supported 00:07:22.548 Command & Feature Lockdown Capability: Not Supported 00:07:22.548 Abort Command Limit: 4 00:07:22.548 Async Event Request Limit: 4 00:07:22.548 Number of Firmware Slots: N/A 00:07:22.548 Firmware Slot 1 Read-Only: N/A 00:07:22.548 Firmware Activation Without Reset: N/A 00:07:22.548 Multiple Update Detection Support: N/A 00:07:22.548 Firmware Update Granularity: No Information Provided 00:07:22.548 Per-Namespace SMART Log: Yes 00:07:22.548 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.548 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:22.548 Command Effects Log Page: Supported 00:07:22.548 Get Log Page Extended Data: Supported 00:07:22.548 Telemetry Log Pages: Not 
Supported 00:07:22.548 Persistent Event Log Pages: Not Supported 00:07:22.548 Supported Log Pages Log Page: May Support 00:07:22.548 Commands Supported & Effects Log Page: Not Supported 00:07:22.548 Feature Identifiers & Effects Log Page:May Support 00:07:22.548 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.548 Data Area 4 for Telemetry Log: Not Supported 00:07:22.548 Error Log Page Entries Supported: 1 00:07:22.548 Keep Alive: Not Supported 00:07:22.548 00:07:22.548 NVM Command Set Attributes 00:07:22.548 ========================== 00:07:22.548 Submission Queue Entry Size 00:07:22.548 Max: 64 00:07:22.548 Min: 64 00:07:22.548 Completion Queue Entry Size 00:07:22.548 Max: 16 00:07:22.548 Min: 16 00:07:22.548 Number of Namespaces: 256 00:07:22.548 Compare Command: Supported 00:07:22.548 Write Uncorrectable Command: Not Supported 00:07:22.548 Dataset Management Command: Supported 00:07:22.548 Write Zeroes Command: Supported 00:07:22.548 Set Features Save Field: Supported 00:07:22.548 Reservations: Not Supported 00:07:22.548 Timestamp: Supported 00:07:22.548 Copy: Supported 00:07:22.548 Volatile Write Cache: Present 00:07:22.548 Atomic Write Unit (Normal): 1 00:07:22.548 Atomic Write Unit (PFail): 1 00:07:22.548 Atomic Compare & Write Unit: 1 00:07:22.548 Fused Compare & Write: Not Supported 00:07:22.548 Scatter-Gather List 00:07:22.548 SGL Command Set: Supported 00:07:22.548 SGL Keyed: Not Supported 00:07:22.548 SGL Bit Bucket Descriptor: Not Supported 00:07:22.548 SGL Metadata Pointer: Not Supported 00:07:22.548 Oversized SGL: Not Supported 00:07:22.548 SGL Metadata Address: Not Supported 00:07:22.548 SGL Offset: Not Supported 00:07:22.548 Transport SGL Data Block: Not Supported 00:07:22.548 Replay Protected Memory Block: Not Supported 00:07:22.548 00:07:22.548 Firmware Slot Information 00:07:22.548 ========================= 00:07:22.548 Active slot: 1 00:07:22.548 Slot 1 Firmware Revision: 1.0 00:07:22.548 00:07:22.548 00:07:22.548 Commands Supported and Effects 00:07:22.548 ============================== 00:07:22.548 Admin Commands 00:07:22.548 -------------- 00:07:22.548 Delete I/O Submission Queue (00h): Supported 00:07:22.548 Create I/O Submission Queue (01h): Supported 00:07:22.548 Get Log Page (02h): Supported 00:07:22.548 Delete I/O Completion Queue (04h): Supported 00:07:22.548 Create I/O Completion Queue (05h): Supported 00:07:22.548 Identify (06h): Supported 00:07:22.548 Abort (08h): Supported 00:07:22.548 Set Features (09h): Supported 00:07:22.548 Get Features (0Ah): Supported 00:07:22.548 Asynchronous Event Request (0Ch): Supported 00:07:22.548 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.548 Directive Send (19h): Supported 00:07:22.548 Directive Receive (1Ah): Supported 00:07:22.548 Virtualization Management (1Ch): Supported 00:07:22.548 Doorbell Buffer Config (7Ch): Supported 00:07:22.548 Format NVM (80h): Supported LBA-Change 00:07:22.548 I/O Commands 00:07:22.548 ------------ 00:07:22.548 Flush (00h): Supported LBA-Change 00:07:22.548 Write (01h): Supported LBA-Change 00:07:22.548 Read (02h): Supported 00:07:22.548 Compare (05h): Supported 00:07:22.548 Write Zeroes (08h): Supported LBA-Change 00:07:22.548 Dataset Management (09h): Supported LBA-Change 00:07:22.548 Unknown (0Ch): Supported 00:07:22.548 Unknown (12h): Supported 00:07:22.548 Copy (19h): Supported LBA-Change 00:07:22.548 Unknown (1Dh): Supported LBA-Change 00:07:22.548 00:07:22.548 Error Log 00:07:22.548 ========= 00:07:22.548 00:07:22.548 Arbitration 00:07:22.548 =========== 
00:07:22.548 Arbitration Burst: no limit 00:07:22.548 00:07:22.548 Power Management 00:07:22.548 ================ 00:07:22.548 Number of Power States: 1 00:07:22.548 Current Power State: Power State #0 00:07:22.548 Power State #0: 00:07:22.548 Max Power: 25.00 W 00:07:22.548 Non-Operational State: Operational 00:07:22.548 Entry Latency: 16 microseconds 00:07:22.548 Exit Latency: 4 microseconds 00:07:22.548 Relative Read Throughput: 0 00:07:22.548 Relative Read Latency: 0 00:07:22.548 Relative Write Throughput: 0 00:07:22.548 Relative Write Latency: 0 00:07:22.548 Idle Power: Not Reported 00:07:22.548 Active Power: Not Reported 00:07:22.548 Non-Operational Permissive Mode: Not Supported 00:07:22.548 00:07:22.548 Health Information 00:07:22.548 ================== 00:07:22.548 Critical Warnings: 00:07:22.548 Available Spare Space: OK 00:07:22.548 Temperature: OK 00:07:22.548 Device Reliability: OK 00:07:22.548 Read Only: No 00:07:22.548 Volatile Memory Backup: OK 00:07:22.548 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.548 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.548 Available Spare: 0% 00:07:22.548 Available Spare Threshold: 0% 00:07:22.548 Life Percentage Used: 0% 00:07:22.548 Data Units Read: 766 00:07:22.548 Data Units Written: 695 00:07:22.548 Host Read Commands: 38044 00:07:22.548 Host Write Commands: 37467 00:07:22.548 Controller Busy Time: 0 minutes 00:07:22.548 Power Cycles: 0 00:07:22.548 Power On Hours: 0 hours 00:07:22.548 Unsafe Shutdowns: 0 00:07:22.548 Unrecoverable Media Errors: 0 00:07:22.548 Lifetime Error Log Entries: 0 00:07:22.548 Warning Temperature Time: 0 minutes 00:07:22.548 Critical Temperature Time: 0 minutes 00:07:22.548 00:07:22.548 Number of Queues 00:07:22.548 ================ 00:07:22.548 Number of I/O Submission Queues: 64 00:07:22.548 Number of I/O Completion Queues: 64 00:07:22.548 00:07:22.548 ZNS Specific Controller Data 00:07:22.548 ============================ 00:07:22.548 Zone Append Size Limit: 0 00:07:22.548 00:07:22.548 00:07:22.548 Active Namespaces 00:07:22.548 ================= 00:07:22.548 Namespace ID:1 00:07:22.548 Error Recovery Timeout: Unlimited 00:07:22.548 Command Set Identifier: NVM (00h) 00:07:22.548 Deallocate: Supported 00:07:22.548 Deallocated/Unwritten Error: Supported 00:07:22.548 Deallocated Read Value: All 0x00 00:07:22.548 Deallocate in Write Zeroes: Not Supported 00:07:22.548 Deallocated Guard Field: 0xFFFF 00:07:22.548 Flush: Supported 00:07:22.548 Reservation: Not Supported 00:07:22.548 Namespace Sharing Capabilities: Multiple Controllers 00:07:22.548 Size (in LBAs): 262144 (1GiB) 00:07:22.548 Capacity (in LBAs): 262144 (1GiB) 00:07:22.548 Utilization (in LBAs): 262144 (1GiB) 00:07:22.548 Thin Provisioning: Not Supported 00:07:22.548 Per-NS Atomic Units: No 00:07:22.548 Maximum Single Source Range Length: 128 00:07:22.548 Maximum Copy Length: 128 00:07:22.548 Maximum Source Range Count: 128 00:07:22.548 NGUID/EUI64 Never Reused: No 00:07:22.548 Namespace Write Protected: No 00:07:22.548 Endurance group ID: 1 00:07:22.548 Number of LBA Formats: 8 00:07:22.548 Current LBA Format: LBA Format #04 00:07:22.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.548 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.548 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.548 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.548 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.548 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.548 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:22.548 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.548 00:07:22.548 Get Feature FDP: 00:07:22.548 ================ 00:07:22.548 Enabled: Yes 00:07:22.548 FDP configuration index: 0 00:07:22.548 00:07:22.548 FDP configurations log page 00:07:22.548 =========================== 00:07:22.548 Number of FDP configurations: 1 00:07:22.548 Version: 0 00:07:22.548 Size: 112 00:07:22.548 FDP Configuration Descriptor: 0 00:07:22.548 Descriptor Size: 96 00:07:22.548 Reclaim Group Identifier format: 2 00:07:22.548 FDP Volatile Write Cache: Not Present 00:07:22.548 FDP Configuration: Valid 00:07:22.548 Vendor Specific Size: 0 00:07:22.548 Number of Reclaim Groups: 2 00:07:22.548 Number of Recalim Unit Handles: 8 00:07:22.548 Max Placement Identifiers: 128 00:07:22.548 Number of Namespaces Suppprted: 256 00:07:22.548 Reclaim unit Nominal Size: 6000000 bytes 00:07:22.548 Estimated Reclaim Unit Time Limit: Not Reported 00:07:22.549 RUH Desc #000: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #001: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #002: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #003: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #004: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #005: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #006: RUH Type: Initially Isolated 00:07:22.549 RUH Desc #007: RUH Type: Initially Isolated 00:07:22.549 00:07:22.549 FDP reclaim unit handle usage log page 00:07:22.549 ====================================== 00:07:22.549 Number of Reclaim Unit Handles: 8 00:07:22.549 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:22.549 RUH Usage Desc #001: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #002: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #003: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #004: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #005: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #006: RUH Attributes: Unused 00:07:22.549 RUH Usage Desc #007: RUH Attributes: Unused 00:07:22.549 00:07:22.549 FDP statistics log page 00:07:22.549 ======================= 00:07:22.549 Host bytes with metadata written: 445947904 00:07:22.549 Media bytes with metadata written: 445992960 00:07:22.549 Media bytes erased: 0 00:07:22.549 00:07:22.549 FDP events log page 00:07:22.549 =================== 00:07:22.549 Number of FDP events: 0 00:07:22.549 00:07:22.549 NVM Specific Namespace Data 00:07:22.549 =========================== 00:07:22.549 Logical Block Storage Tag Mask: 0 00:07:22.549 Protection Information Capabilities: 00:07:22.549 16b Guard Protection Information Storage Tag Support: No 00:07:22.549 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.549 Storage Tag Check Read Support: No 00:07:22.549 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.549 ************************************ 00:07:22.549 END TEST nvme_identify 00:07:22.549 ************************************ 00:07:22.549 00:07:22.549 real 0m1.171s 00:07:22.549 user 0m0.447s 00:07:22.549 sys 0m0.519s 00:07:22.549 04:27:45 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.549 04:27:45 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:22.549 04:27:45 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:22.549 04:27:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.549 04:27:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.549 04:27:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.549 ************************************ 00:07:22.549 START TEST nvme_perf 00:07:22.549 ************************************ 00:07:22.549 04:27:45 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:07:22.549 04:27:45 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:23.924 Initializing NVMe Controllers 00:07:23.924 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.924 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:23.924 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:23.924 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:23.924 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:23.924 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:23.924 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:23.924 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:23.924 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:23.924 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:23.924 Initialization complete. Launching workers. 
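The two tool invocations captured above can be repeated outside the CI harness; the following is a minimal sketch, not part of the captured output. It assumes a local SPDK build at the same path used by this run and a controller still bound at one of the PCIe addresses listed in the initialization banner; elevated privileges are typically needed for device access. All flags are copied verbatim from the logged commands: -q 128 (queue depth), -w read (100% reads), -o 12288 (I/O size in bytes), -t 1 (run time in seconds); the remaining flags (-LL, -i 0, -N) are reproduced as-is from the log without further interpretation.

  # Identify a single controller by its PCIe transport address (as in the nvme_identify test above)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Re-run the same read-latency workload whose results are printed below
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

In the output that follows, the per-device "Summary latency data" percentiles appear to be read off the cumulative percentages in each per-device latency histogram: a percentile is reported as the upper bound of the first bucket whose cumulative IO count reaches that fraction.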
00:07:23.924 ======================================================== 00:07:23.924 Latency(us) 00:07:23.924 Device Information : IOPS MiB/s Average min max 00:07:23.924 PCIE (0000:00:10.0) NSID 1 from core 0: 18537.01 217.23 6915.68 5590.27 36210.32 00:07:23.924 PCIE (0000:00:11.0) NSID 1 from core 0: 18600.93 217.98 6883.10 5655.65 31189.40 00:07:23.924 PCIE (0000:00:13.0) NSID 1 from core 0: 18600.93 217.98 6873.12 5633.77 29856.15 00:07:23.924 PCIE (0000:00:12.0) NSID 1 from core 0: 18600.93 217.98 6863.04 5669.62 28041.18 00:07:23.924 PCIE (0000:00:12.0) NSID 2 from core 0: 18600.93 217.98 6853.01 5641.76 26247.75 00:07:23.924 PCIE (0000:00:12.0) NSID 3 from core 0: 18600.93 217.98 6843.07 5641.33 24468.83 00:07:23.924 ======================================================== 00:07:23.924 Total : 111541.69 1307.13 6871.81 5590.27 36210.32 00:07:23.924 00:07:23.924 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:23.924 ================================================================================= 00:07:23.924 1.00000% : 5696.591us 00:07:23.924 10.00000% : 5923.446us 00:07:23.924 25.00000% : 6200.714us 00:07:23.924 50.00000% : 6503.188us 00:07:23.924 75.00000% : 6906.486us 00:07:23.924 90.00000% : 7461.022us 00:07:23.924 95.00000% : 9527.926us 00:07:23.924 98.00000% : 10989.883us 00:07:23.924 99.00000% : 12855.138us 00:07:23.924 99.50000% : 31255.631us 00:07:23.924 99.90000% : 35893.563us 00:07:23.924 99.99000% : 36296.862us 00:07:23.924 99.99900% : 36296.862us 00:07:23.924 99.99990% : 36296.862us 00:07:23.924 99.99999% : 36296.862us 00:07:23.924 00:07:23.924 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:23.924 ================================================================================= 00:07:23.924 1.00000% : 5772.209us 00:07:23.924 10.00000% : 5973.858us 00:07:23.924 25.00000% : 6225.920us 00:07:23.924 50.00000% : 6503.188us 00:07:23.924 75.00000% : 6856.074us 00:07:23.924 90.00000% : 7561.846us 00:07:23.924 95.00000% : 9427.102us 00:07:23.924 98.00000% : 11141.120us 00:07:23.924 99.00000% : 12754.314us 00:07:23.924 99.50000% : 26416.049us 00:07:23.924 99.90000% : 30852.332us 00:07:23.924 99.99000% : 31255.631us 00:07:23.924 99.99900% : 31255.631us 00:07:23.924 99.99990% : 31255.631us 00:07:23.924 99.99999% : 31255.631us 00:07:23.924 00:07:23.924 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:23.924 ================================================================================= 00:07:23.924 1.00000% : 5772.209us 00:07:23.924 10.00000% : 5948.652us 00:07:23.924 25.00000% : 6225.920us 00:07:23.924 50.00000% : 6503.188us 00:07:23.924 75.00000% : 6856.074us 00:07:23.924 90.00000% : 7561.846us 00:07:23.924 95.00000% : 9376.689us 00:07:23.924 98.00000% : 11241.945us 00:07:23.924 99.00000% : 12401.428us 00:07:23.924 99.50000% : 24601.206us 00:07:23.924 99.90000% : 29440.788us 00:07:23.924 99.99000% : 29844.086us 00:07:23.924 99.99900% : 30045.735us 00:07:23.924 99.99990% : 30045.735us 00:07:23.924 99.99999% : 30045.735us 00:07:23.924 00:07:23.924 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:23.924 ================================================================================= 00:07:23.924 1.00000% : 5747.003us 00:07:23.924 10.00000% : 5973.858us 00:07:23.924 25.00000% : 6225.920us 00:07:23.924 50.00000% : 6503.188us 00:07:23.924 75.00000% : 6856.074us 00:07:23.924 90.00000% : 7561.846us 00:07:23.924 95.00000% : 9376.689us 00:07:23.924 98.00000% : 11191.532us 00:07:23.924 99.00000% : 
12401.428us 00:07:23.924 99.50000% : 22988.012us 00:07:23.924 99.90000% : 27625.945us 00:07:23.924 99.99000% : 28029.243us 00:07:23.924 99.99900% : 28230.892us 00:07:23.924 99.99990% : 28230.892us 00:07:23.924 99.99999% : 28230.892us 00:07:23.924 00:07:23.924 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:23.924 ================================================================================= 00:07:23.924 1.00000% : 5772.209us 00:07:23.924 10.00000% : 5973.858us 00:07:23.924 25.00000% : 6225.920us 00:07:23.925 50.00000% : 6503.188us 00:07:23.925 75.00000% : 6856.074us 00:07:23.925 90.00000% : 7561.846us 00:07:23.925 95.00000% : 9427.102us 00:07:23.925 98.00000% : 11141.120us 00:07:23.925 99.00000% : 12351.015us 00:07:23.925 99.50000% : 21173.169us 00:07:23.925 99.90000% : 25811.102us 00:07:23.925 99.99000% : 26416.049us 00:07:23.925 99.99900% : 26416.049us 00:07:23.925 99.99990% : 26416.049us 00:07:23.925 99.99999% : 26416.049us 00:07:23.925 00:07:23.925 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:23.925 ================================================================================= 00:07:23.925 1.00000% : 5772.209us 00:07:23.925 10.00000% : 5973.858us 00:07:23.925 25.00000% : 6225.920us 00:07:23.925 50.00000% : 6503.188us 00:07:23.925 75.00000% : 6856.074us 00:07:23.925 90.00000% : 7612.258us 00:07:23.925 95.00000% : 9527.926us 00:07:23.925 98.00000% : 10989.883us 00:07:23.925 99.00000% : 12703.902us 00:07:23.925 99.50000% : 19358.326us 00:07:23.925 99.90000% : 24097.083us 00:07:23.925 99.99000% : 24500.382us 00:07:23.925 99.99900% : 24500.382us 00:07:23.925 99.99990% : 24500.382us 00:07:23.925 99.99999% : 24500.382us 00:07:23.925 00:07:23.925 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:23.925 ============================================================================== 00:07:23.925 Range in us Cumulative IO count 00:07:23.925 5570.560 - 5595.766: 0.0108% ( 2) 00:07:23.925 5595.766 - 5620.972: 0.1131% ( 19) 00:07:23.925 5620.972 - 5646.178: 0.3772% ( 49) 00:07:23.925 5646.178 - 5671.385: 0.8082% ( 80) 00:07:23.925 5671.385 - 5696.591: 1.4763% ( 124) 00:07:23.925 5696.591 - 5721.797: 2.3976% ( 171) 00:07:23.925 5721.797 - 5747.003: 3.3567% ( 178) 00:07:23.925 5747.003 - 5772.209: 4.3966% ( 193) 00:07:23.925 5772.209 - 5797.415: 5.3664% ( 180) 00:07:23.925 5797.415 - 5822.622: 6.4116% ( 194) 00:07:23.925 5822.622 - 5847.828: 7.5269% ( 207) 00:07:23.925 5847.828 - 5873.034: 8.5722% ( 194) 00:07:23.925 5873.034 - 5898.240: 9.6552% ( 201) 00:07:23.925 5898.240 - 5923.446: 10.7220% ( 198) 00:07:23.925 5923.446 - 5948.652: 11.8912% ( 217) 00:07:23.925 5948.652 - 5973.858: 13.0603% ( 217) 00:07:23.925 5973.858 - 5999.065: 14.2026% ( 212) 00:07:23.925 5999.065 - 6024.271: 15.3879% ( 220) 00:07:23.925 6024.271 - 6049.477: 16.6487% ( 234) 00:07:23.925 6049.477 - 6074.683: 17.8179% ( 217) 00:07:23.925 6074.683 - 6099.889: 19.0948% ( 237) 00:07:23.925 6099.889 - 6125.095: 20.3017% ( 224) 00:07:23.925 6125.095 - 6150.302: 21.8050% ( 279) 00:07:23.925 6150.302 - 6175.508: 23.1789% ( 255) 00:07:23.925 6175.508 - 6200.714: 25.0808% ( 353) 00:07:23.925 6200.714 - 6225.920: 27.1067% ( 376) 00:07:23.925 6225.920 - 6251.126: 29.1541% ( 380) 00:07:23.925 6251.126 - 6276.332: 31.1961% ( 379) 00:07:23.925 6276.332 - 6301.538: 33.2866% ( 388) 00:07:23.925 6301.538 - 6326.745: 35.5119% ( 413) 00:07:23.925 6326.745 - 6351.951: 37.6293% ( 393) 00:07:23.925 6351.951 - 6377.157: 39.7198% ( 388) 00:07:23.925 6377.157 - 6402.363: 41.9235% ( 
409) 00:07:23.925 6402.363 - 6427.569: 44.0571% ( 396) 00:07:23.925 6427.569 - 6452.775: 46.2554% ( 408) 00:07:23.925 6452.775 - 6503.188: 50.6304% ( 812) 00:07:23.925 6503.188 - 6553.600: 54.9192% ( 796) 00:07:23.925 6553.600 - 6604.012: 59.1810% ( 791) 00:07:23.925 6604.012 - 6654.425: 63.2920% ( 763) 00:07:23.925 6654.425 - 6704.837: 66.5787% ( 610) 00:07:23.925 6704.837 - 6755.249: 69.4397% ( 531) 00:07:23.925 6755.249 - 6805.662: 71.9289% ( 462) 00:07:23.925 6805.662 - 6856.074: 74.3534% ( 450) 00:07:23.925 6856.074 - 6906.486: 76.6379% ( 424) 00:07:23.925 6906.486 - 6956.898: 78.8793% ( 416) 00:07:23.925 6956.898 - 7007.311: 81.0291% ( 399) 00:07:23.925 7007.311 - 7057.723: 83.0657% ( 378) 00:07:23.925 7057.723 - 7108.135: 84.9677% ( 353) 00:07:23.925 7108.135 - 7158.548: 86.8481% ( 349) 00:07:23.925 7158.548 - 7208.960: 88.2166% ( 254) 00:07:23.925 7208.960 - 7259.372: 89.0194% ( 149) 00:07:23.925 7259.372 - 7309.785: 89.4504% ( 80) 00:07:23.925 7309.785 - 7360.197: 89.6983% ( 46) 00:07:23.925 7360.197 - 7410.609: 89.9192% ( 41) 00:07:23.925 7410.609 - 7461.022: 90.0970% ( 33) 00:07:23.925 7461.022 - 7511.434: 90.2586% ( 30) 00:07:23.925 7511.434 - 7561.846: 90.3825% ( 23) 00:07:23.925 7561.846 - 7612.258: 90.5172% ( 25) 00:07:23.925 7612.258 - 7662.671: 90.6304% ( 21) 00:07:23.925 7662.671 - 7713.083: 90.7435% ( 21) 00:07:23.925 7713.083 - 7763.495: 90.8998% ( 29) 00:07:23.925 7763.495 - 7813.908: 91.0075% ( 20) 00:07:23.925 7813.908 - 7864.320: 91.1584% ( 28) 00:07:23.925 7864.320 - 7914.732: 91.2284% ( 13) 00:07:23.925 7914.732 - 7965.145: 91.3147% ( 16) 00:07:23.925 7965.145 - 8015.557: 91.4116% ( 18) 00:07:23.925 8015.557 - 8065.969: 91.4817% ( 13) 00:07:23.925 8065.969 - 8116.382: 91.5948% ( 21) 00:07:23.925 8116.382 - 8166.794: 91.6703% ( 14) 00:07:23.925 8166.794 - 8217.206: 91.7457% ( 14) 00:07:23.925 8217.206 - 8267.618: 91.8373% ( 17) 00:07:23.925 8267.618 - 8318.031: 91.9450% ( 20) 00:07:23.925 8318.031 - 8368.443: 92.0366% ( 17) 00:07:23.925 8368.443 - 8418.855: 92.1336% ( 18) 00:07:23.925 8418.855 - 8469.268: 92.2414% ( 20) 00:07:23.925 8469.268 - 8519.680: 92.3276% ( 16) 00:07:23.925 8519.680 - 8570.092: 92.4138% ( 16) 00:07:23.925 8570.092 - 8620.505: 92.5162% ( 19) 00:07:23.925 8620.505 - 8670.917: 92.6239% ( 20) 00:07:23.925 8670.917 - 8721.329: 92.7856% ( 30) 00:07:23.925 8721.329 - 8771.742: 92.9203% ( 25) 00:07:23.925 8771.742 - 8822.154: 93.0388% ( 22) 00:07:23.925 8822.154 - 8872.566: 93.1897% ( 28) 00:07:23.925 8872.566 - 8922.978: 93.3351% ( 27) 00:07:23.925 8922.978 - 8973.391: 93.4644% ( 24) 00:07:23.925 8973.391 - 9023.803: 93.6369% ( 32) 00:07:23.925 9023.803 - 9074.215: 93.7985% ( 30) 00:07:23.925 9074.215 - 9124.628: 93.9655% ( 31) 00:07:23.925 9124.628 - 9175.040: 94.1002% ( 25) 00:07:23.925 9175.040 - 9225.452: 94.2403% ( 26) 00:07:23.925 9225.452 - 9275.865: 94.3858% ( 27) 00:07:23.925 9275.865 - 9326.277: 94.5420% ( 29) 00:07:23.925 9326.277 - 9376.689: 94.6498% ( 20) 00:07:23.925 9376.689 - 9427.102: 94.7468% ( 18) 00:07:23.925 9427.102 - 9477.514: 94.8815% ( 25) 00:07:23.925 9477.514 - 9527.926: 95.0162% ( 25) 00:07:23.925 9527.926 - 9578.338: 95.1724% ( 29) 00:07:23.925 9578.338 - 9628.751: 95.3233% ( 28) 00:07:23.925 9628.751 - 9679.163: 95.4957% ( 32) 00:07:23.925 9679.163 - 9729.575: 95.6196% ( 23) 00:07:23.925 9729.575 - 9779.988: 95.7543% ( 25) 00:07:23.925 9779.988 - 9830.400: 95.8782% ( 23) 00:07:23.925 9830.400 - 9880.812: 96.0345% ( 29) 00:07:23.925 9880.812 - 9931.225: 96.1422% ( 20) 00:07:23.925 9931.225 - 9981.637: 96.2608% ( 22) 
00:07:23.925 9981.637 - 10032.049: 96.3685% ( 20) 00:07:23.925 10032.049 - 10082.462: 96.4871% ( 22) 00:07:23.925 10082.462 - 10132.874: 96.5948% ( 20) 00:07:23.925 10132.874 - 10183.286: 96.7295% ( 25) 00:07:23.925 10183.286 - 10233.698: 96.8427% ( 21) 00:07:23.925 10233.698 - 10284.111: 96.9504% ( 20) 00:07:23.925 10284.111 - 10334.523: 97.0259% ( 14) 00:07:23.925 10334.523 - 10384.935: 97.1282% ( 19) 00:07:23.925 10384.935 - 10435.348: 97.2144% ( 16) 00:07:23.925 10435.348 - 10485.760: 97.3222% ( 20) 00:07:23.925 10485.760 - 10536.172: 97.3922% ( 13) 00:07:23.925 10536.172 - 10586.585: 97.4946% ( 19) 00:07:23.925 10586.585 - 10636.997: 97.5970% ( 19) 00:07:23.925 10636.997 - 10687.409: 97.6670% ( 13) 00:07:23.925 10687.409 - 10737.822: 97.7209% ( 10) 00:07:23.925 10737.822 - 10788.234: 97.7963% ( 14) 00:07:23.925 10788.234 - 10838.646: 97.8664% ( 13) 00:07:23.925 10838.646 - 10889.058: 97.9364% ( 13) 00:07:23.925 10889.058 - 10939.471: 97.9903% ( 10) 00:07:23.925 10939.471 - 10989.883: 98.0280% ( 7) 00:07:23.925 10989.883 - 11040.295: 98.0603% ( 6) 00:07:23.925 11040.295 - 11090.708: 98.1142% ( 10) 00:07:23.925 11090.708 - 11141.120: 98.1573% ( 8) 00:07:23.925 11141.120 - 11191.532: 98.2058% ( 9) 00:07:23.925 11191.532 - 11241.945: 98.2381% ( 6) 00:07:23.925 11241.945 - 11292.357: 98.2705% ( 6) 00:07:23.925 11292.357 - 11342.769: 98.3028% ( 6) 00:07:23.925 11342.769 - 11393.182: 98.3513% ( 9) 00:07:23.925 11393.182 - 11443.594: 98.3944% ( 8) 00:07:23.925 11443.594 - 11494.006: 98.4375% ( 8) 00:07:23.925 11494.006 - 11544.418: 98.4698% ( 6) 00:07:23.925 11544.418 - 11594.831: 98.4968% ( 5) 00:07:23.925 11594.831 - 11645.243: 98.5129% ( 3) 00:07:23.925 11645.243 - 11695.655: 98.5506% ( 7) 00:07:23.925 11695.655 - 11746.068: 98.5722% ( 4) 00:07:23.925 11746.068 - 11796.480: 98.5938% ( 4) 00:07:23.925 11796.480 - 11846.892: 98.6099% ( 3) 00:07:23.925 11846.892 - 11897.305: 98.6369% ( 5) 00:07:23.925 11897.305 - 11947.717: 98.6530% ( 3) 00:07:23.925 11947.717 - 11998.129: 98.6853% ( 6) 00:07:23.925 11998.129 - 12048.542: 98.7015% ( 3) 00:07:23.925 12048.542 - 12098.954: 98.7284% ( 5) 00:07:23.925 12098.954 - 12149.366: 98.7446% ( 3) 00:07:23.925 12149.366 - 12199.778: 98.7500% ( 1) 00:07:23.925 12199.778 - 12250.191: 98.7608% ( 2) 00:07:23.925 12250.191 - 12300.603: 98.7716% ( 2) 00:07:23.925 12300.603 - 12351.015: 98.7877% ( 3) 00:07:23.925 12351.015 - 12401.428: 98.7931% ( 1) 00:07:23.925 12401.428 - 12451.840: 98.8039% ( 2) 00:07:23.925 12451.840 - 12502.252: 98.8147% ( 2) 00:07:23.925 12502.252 - 12552.665: 98.8416% ( 5) 00:07:23.925 12552.665 - 12603.077: 98.8685% ( 5) 00:07:23.925 12603.077 - 12653.489: 98.9009% ( 6) 00:07:23.925 12653.489 - 12703.902: 98.9332% ( 6) 00:07:23.925 12703.902 - 12754.314: 98.9547% ( 4) 00:07:23.925 12754.314 - 12804.726: 98.9817% ( 5) 00:07:23.925 12804.726 - 12855.138: 99.0032% ( 4) 00:07:23.925 12855.138 - 12905.551: 99.0248% ( 4) 00:07:23.926 12905.551 - 13006.375: 99.0841% ( 11) 00:07:23.926 13006.375 - 13107.200: 99.1272% ( 8) 00:07:23.926 13107.200 - 13208.025: 99.1756% ( 9) 00:07:23.926 13208.025 - 13308.849: 99.2188% ( 8) 00:07:23.926 13308.849 - 13409.674: 99.2403% ( 4) 00:07:23.926 13409.674 - 13510.498: 99.2780% ( 7) 00:07:23.926 13510.498 - 13611.323: 99.3103% ( 6) 00:07:23.926 30045.735 - 30247.385: 99.3211% ( 2) 00:07:23.926 30247.385 - 30449.034: 99.3642% ( 8) 00:07:23.926 30449.034 - 30650.683: 99.4073% ( 8) 00:07:23.926 30650.683 - 30852.332: 99.4450% ( 7) 00:07:23.926 30852.332 - 31053.982: 99.4935% ( 9) 00:07:23.926 31053.982 - 
31255.631: 99.5312% ( 7) 00:07:23.926 31255.631 - 31457.280: 99.5797% ( 9) 00:07:23.926 31457.280 - 31658.929: 99.6175% ( 7) 00:07:23.926 31658.929 - 31860.578: 99.6552% ( 7) 00:07:23.926 34482.018 - 34683.668: 99.6767% ( 4) 00:07:23.926 34683.668 - 34885.317: 99.7198% ( 8) 00:07:23.926 34885.317 - 35086.966: 99.7575% ( 7) 00:07:23.926 35086.966 - 35288.615: 99.8060% ( 9) 00:07:23.926 35288.615 - 35490.265: 99.8491% ( 8) 00:07:23.926 35490.265 - 35691.914: 99.8922% ( 8) 00:07:23.926 35691.914 - 35893.563: 99.9353% ( 8) 00:07:23.926 35893.563 - 36095.212: 99.9784% ( 8) 00:07:23.926 36095.212 - 36296.862: 100.0000% ( 4) 00:07:23.926 00:07:23.926 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:23.926 ============================================================================== 00:07:23.926 Range in us Cumulative IO count 00:07:23.926 5646.178 - 5671.385: 0.0161% ( 3) 00:07:23.926 5671.385 - 5696.591: 0.0859% ( 13) 00:07:23.926 5696.591 - 5721.797: 0.4027% ( 59) 00:07:23.926 5721.797 - 5747.003: 0.9182% ( 96) 00:07:23.926 5747.003 - 5772.209: 1.6377% ( 134) 00:07:23.926 5772.209 - 5797.415: 2.5988% ( 179) 00:07:23.926 5797.415 - 5822.622: 3.6458% ( 195) 00:07:23.926 5822.622 - 5847.828: 4.8540% ( 225) 00:07:23.926 5847.828 - 5873.034: 6.0621% ( 225) 00:07:23.926 5873.034 - 5898.240: 7.2595% ( 223) 00:07:23.926 5898.240 - 5923.446: 8.5320% ( 237) 00:07:23.926 5923.446 - 5948.652: 9.7079% ( 219) 00:07:23.926 5948.652 - 5973.858: 11.0234% ( 245) 00:07:23.926 5973.858 - 5999.065: 12.3013% ( 238) 00:07:23.926 5999.065 - 6024.271: 13.7027% ( 261) 00:07:23.926 6024.271 - 6049.477: 15.1042% ( 261) 00:07:23.926 6049.477 - 6074.683: 16.5378% ( 267) 00:07:23.926 6074.683 - 6099.889: 18.0144% ( 275) 00:07:23.926 6099.889 - 6125.095: 19.4427% ( 266) 00:07:23.926 6125.095 - 6150.302: 20.8548% ( 263) 00:07:23.926 6150.302 - 6175.508: 22.3797% ( 284) 00:07:23.926 6175.508 - 6200.714: 23.9583% ( 294) 00:07:23.926 6200.714 - 6225.920: 25.5692% ( 300) 00:07:23.926 6225.920 - 6251.126: 27.4753% ( 355) 00:07:23.926 6251.126 - 6276.332: 29.6392% ( 403) 00:07:23.926 6276.332 - 6301.538: 31.8890% ( 419) 00:07:23.926 6301.538 - 6326.745: 34.1656% ( 424) 00:07:23.926 6326.745 - 6351.951: 36.5228% ( 439) 00:07:23.926 6351.951 - 6377.157: 38.9122% ( 445) 00:07:23.926 6377.157 - 6402.363: 41.3445% ( 453) 00:07:23.926 6402.363 - 6427.569: 43.8681% ( 470) 00:07:23.926 6427.569 - 6452.775: 46.2897% ( 451) 00:07:23.926 6452.775 - 6503.188: 51.3746% ( 947) 00:07:23.926 6503.188 - 6553.600: 56.4701% ( 949) 00:07:23.926 6553.600 - 6604.012: 60.9482% ( 834) 00:07:23.926 6604.012 - 6654.425: 64.4759% ( 657) 00:07:23.926 6654.425 - 6704.837: 67.4560% ( 555) 00:07:23.926 6704.837 - 6755.249: 70.1944% ( 510) 00:07:23.926 6755.249 - 6805.662: 72.8683% ( 498) 00:07:23.926 6805.662 - 6856.074: 75.4081% ( 473) 00:07:23.926 6856.074 - 6906.486: 77.8780% ( 460) 00:07:23.926 6906.486 - 6956.898: 80.1976% ( 432) 00:07:23.926 6956.898 - 7007.311: 82.5064% ( 430) 00:07:23.926 7007.311 - 7057.723: 84.6972% ( 408) 00:07:23.926 7057.723 - 7108.135: 86.5765% ( 350) 00:07:23.926 7108.135 - 7158.548: 87.7846% ( 225) 00:07:23.926 7158.548 - 7208.960: 88.4665% ( 127) 00:07:23.926 7208.960 - 7259.372: 88.8316% ( 68) 00:07:23.926 7259.372 - 7309.785: 89.0732% ( 45) 00:07:23.926 7309.785 - 7360.197: 89.2988% ( 42) 00:07:23.926 7360.197 - 7410.609: 89.4974% ( 37) 00:07:23.926 7410.609 - 7461.022: 89.6800% ( 34) 00:07:23.926 7461.022 - 7511.434: 89.8518% ( 32) 00:07:23.926 7511.434 - 7561.846: 90.0236% ( 32) 00:07:23.926 7561.846 - 
7612.258: 90.2169% ( 36) 00:07:23.926 7612.258 - 7662.671: 90.3673% ( 28) 00:07:23.926 7662.671 - 7713.083: 90.5713% ( 38) 00:07:23.926 7713.083 - 7763.495: 90.7539% ( 34) 00:07:23.926 7763.495 - 7813.908: 90.9525% ( 37) 00:07:23.926 7813.908 - 7864.320: 91.1244% ( 32) 00:07:23.926 7864.320 - 7914.732: 91.2801% ( 29) 00:07:23.926 7914.732 - 7965.145: 91.4197% ( 26) 00:07:23.926 7965.145 - 8015.557: 91.5485% ( 24) 00:07:23.926 8015.557 - 8065.969: 91.6452% ( 18) 00:07:23.926 8065.969 - 8116.382: 91.7311% ( 16) 00:07:23.926 8116.382 - 8166.794: 91.8224% ( 17) 00:07:23.926 8166.794 - 8217.206: 91.9459% ( 23) 00:07:23.926 8217.206 - 8267.618: 92.0533% ( 20) 00:07:23.926 8267.618 - 8318.031: 92.1821% ( 24) 00:07:23.926 8318.031 - 8368.443: 92.3056% ( 23) 00:07:23.926 8368.443 - 8418.855: 92.4023% ( 18) 00:07:23.926 8418.855 - 8469.268: 92.5097% ( 20) 00:07:23.926 8469.268 - 8519.680: 92.6171% ( 20) 00:07:23.926 8519.680 - 8570.092: 92.7030% ( 16) 00:07:23.926 8570.092 - 8620.505: 92.7835% ( 15) 00:07:23.926 8620.505 - 8670.917: 92.8855% ( 19) 00:07:23.926 8670.917 - 8721.329: 93.0090% ( 23) 00:07:23.926 8721.329 - 8771.742: 93.1325% ( 23) 00:07:23.926 8771.742 - 8822.154: 93.2775% ( 27) 00:07:23.926 8822.154 - 8872.566: 93.4278% ( 28) 00:07:23.926 8872.566 - 8922.978: 93.5835% ( 29) 00:07:23.926 8922.978 - 8973.391: 93.7232% ( 26) 00:07:23.926 8973.391 - 9023.803: 93.8681% ( 27) 00:07:23.926 9023.803 - 9074.215: 94.0185% ( 28) 00:07:23.926 9074.215 - 9124.628: 94.1527% ( 25) 00:07:23.926 9124.628 - 9175.040: 94.2869% ( 25) 00:07:23.926 9175.040 - 9225.452: 94.4265% ( 26) 00:07:23.926 9225.452 - 9275.865: 94.5715% ( 27) 00:07:23.926 9275.865 - 9326.277: 94.7433% ( 32) 00:07:23.926 9326.277 - 9376.689: 94.9098% ( 31) 00:07:23.926 9376.689 - 9427.102: 95.0762% ( 31) 00:07:23.926 9427.102 - 9477.514: 95.2266% ( 28) 00:07:23.926 9477.514 - 9527.926: 95.3662% ( 26) 00:07:23.926 9527.926 - 9578.338: 95.4897% ( 23) 00:07:23.926 9578.338 - 9628.751: 95.5917% ( 19) 00:07:23.926 9628.751 - 9679.163: 95.6884% ( 18) 00:07:23.926 9679.163 - 9729.575: 95.7635% ( 14) 00:07:23.926 9729.575 - 9779.988: 95.8817% ( 22) 00:07:23.926 9779.988 - 9830.400: 95.9890% ( 20) 00:07:23.926 9830.400 - 9880.812: 96.0803% ( 17) 00:07:23.926 9880.812 - 9931.225: 96.1877% ( 20) 00:07:23.926 9931.225 - 9981.637: 96.2897% ( 19) 00:07:23.926 9981.637 - 10032.049: 96.3864% ( 18) 00:07:23.926 10032.049 - 10082.462: 96.5152% ( 24) 00:07:23.926 10082.462 - 10132.874: 96.6173% ( 19) 00:07:23.926 10132.874 - 10183.286: 96.7139% ( 18) 00:07:23.926 10183.286 - 10233.698: 96.8106% ( 18) 00:07:23.926 10233.698 - 10284.111: 96.9072% ( 18) 00:07:23.926 10284.111 - 10334.523: 96.9931% ( 16) 00:07:23.926 10334.523 - 10384.935: 97.0790% ( 16) 00:07:23.926 10384.935 - 10435.348: 97.1649% ( 16) 00:07:23.926 10435.348 - 10485.760: 97.2562% ( 17) 00:07:23.926 10485.760 - 10536.172: 97.3529% ( 18) 00:07:23.926 10536.172 - 10586.585: 97.4388% ( 16) 00:07:23.926 10586.585 - 10636.997: 97.5193% ( 15) 00:07:23.926 10636.997 - 10687.409: 97.5999% ( 15) 00:07:23.926 10687.409 - 10737.822: 97.6697% ( 13) 00:07:23.926 10737.822 - 10788.234: 97.7180% ( 9) 00:07:23.926 10788.234 - 10838.646: 97.7556% ( 7) 00:07:23.926 10838.646 - 10889.058: 97.7985% ( 8) 00:07:23.926 10889.058 - 10939.471: 97.8415% ( 8) 00:07:23.926 10939.471 - 10989.883: 97.8898% ( 9) 00:07:23.926 10989.883 - 11040.295: 97.9381% ( 9) 00:07:23.926 11040.295 - 11090.708: 97.9811% ( 8) 00:07:23.926 11090.708 - 11141.120: 98.0455% ( 12) 00:07:23.926 11141.120 - 11191.532: 98.0939% ( 9) 
00:07:23.926 11191.532 - 11241.945: 98.1261% ( 6) 00:07:23.926 11241.945 - 11292.357: 98.1583% ( 6) 00:07:23.926 11292.357 - 11342.769: 98.2120% ( 10) 00:07:23.926 11342.769 - 11393.182: 98.2496% ( 7) 00:07:23.926 11393.182 - 11443.594: 98.3033% ( 10) 00:07:23.926 11443.594 - 11494.006: 98.3677% ( 12) 00:07:23.926 11494.006 - 11544.418: 98.4160% ( 9) 00:07:23.926 11544.418 - 11594.831: 98.4697% ( 10) 00:07:23.926 11594.831 - 11645.243: 98.5127% ( 8) 00:07:23.926 11645.243 - 11695.655: 98.5664% ( 10) 00:07:23.926 11695.655 - 11746.068: 98.6147% ( 9) 00:07:23.926 11746.068 - 11796.480: 98.6469% ( 6) 00:07:23.926 11796.480 - 11846.892: 98.6791% ( 6) 00:07:23.926 11846.892 - 11897.305: 98.7060% ( 5) 00:07:23.926 11897.305 - 11947.717: 98.7382% ( 6) 00:07:23.926 11947.717 - 11998.129: 98.7704% ( 6) 00:07:23.926 11998.129 - 12048.542: 98.7973% ( 5) 00:07:23.926 12048.542 - 12098.954: 98.8295% ( 6) 00:07:23.926 12098.954 - 12149.366: 98.8617% ( 6) 00:07:23.926 12149.366 - 12199.778: 98.8939% ( 6) 00:07:23.926 12199.778 - 12250.191: 98.9154% ( 4) 00:07:23.926 12250.191 - 12300.603: 98.9261% ( 2) 00:07:23.926 12300.603 - 12351.015: 98.9422% ( 3) 00:07:23.926 12351.015 - 12401.428: 98.9530% ( 2) 00:07:23.926 12401.428 - 12451.840: 98.9637% ( 2) 00:07:23.926 12451.840 - 12502.252: 98.9691% ( 1) 00:07:23.926 12603.077 - 12653.489: 98.9798% ( 2) 00:07:23.926 12653.489 - 12703.902: 98.9959% ( 3) 00:07:23.926 12703.902 - 12754.314: 99.0120% ( 3) 00:07:23.926 12754.314 - 12804.726: 99.0389% ( 5) 00:07:23.926 12804.726 - 12855.138: 99.0657% ( 5) 00:07:23.926 12855.138 - 12905.551: 99.0818% ( 3) 00:07:23.927 12905.551 - 13006.375: 99.1140% ( 6) 00:07:23.927 13006.375 - 13107.200: 99.1516% ( 7) 00:07:23.927 13107.200 - 13208.025: 99.1892% ( 7) 00:07:23.927 13208.025 - 13308.849: 99.2268% ( 7) 00:07:23.927 13308.849 - 13409.674: 99.2644% ( 7) 00:07:23.927 13409.674 - 13510.498: 99.3020% ( 7) 00:07:23.927 13510.498 - 13611.323: 99.3127% ( 2) 00:07:23.927 25508.628 - 25609.452: 99.3235% ( 2) 00:07:23.927 25609.452 - 25710.277: 99.3503% ( 5) 00:07:23.927 25710.277 - 25811.102: 99.3718% ( 4) 00:07:23.927 25811.102 - 26012.751: 99.4147% ( 8) 00:07:23.927 26012.751 - 26214.400: 99.4631% ( 9) 00:07:23.927 26214.400 - 26416.049: 99.5060% ( 8) 00:07:23.927 26416.049 - 26617.698: 99.5490% ( 8) 00:07:23.927 26617.698 - 26819.348: 99.5973% ( 9) 00:07:23.927 26819.348 - 27020.997: 99.6402% ( 8) 00:07:23.927 27020.997 - 27222.646: 99.6564% ( 3) 00:07:23.927 29642.437 - 29844.086: 99.6993% ( 8) 00:07:23.927 29844.086 - 30045.735: 99.7423% ( 8) 00:07:23.927 30045.735 - 30247.385: 99.7852% ( 8) 00:07:23.927 30247.385 - 30449.034: 99.8282% ( 8) 00:07:23.927 30449.034 - 30650.683: 99.8711% ( 8) 00:07:23.927 30650.683 - 30852.332: 99.9195% ( 9) 00:07:23.927 30852.332 - 31053.982: 99.9678% ( 9) 00:07:23.927 31053.982 - 31255.631: 100.0000% ( 6) 00:07:23.927 00:07:23.927 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:23.927 ============================================================================== 00:07:23.927 Range in us Cumulative IO count 00:07:23.927 5620.972 - 5646.178: 0.0161% ( 3) 00:07:23.927 5646.178 - 5671.385: 0.0698% ( 10) 00:07:23.927 5671.385 - 5696.591: 0.2201% ( 28) 00:07:23.927 5696.591 - 5721.797: 0.4188% ( 37) 00:07:23.927 5721.797 - 5747.003: 0.9772% ( 104) 00:07:23.927 5747.003 - 5772.209: 1.8417% ( 161) 00:07:23.927 5772.209 - 5797.415: 2.8136% ( 181) 00:07:23.927 5797.415 - 5822.622: 3.8230% ( 188) 00:07:23.927 5822.622 - 5847.828: 5.0150% ( 222) 00:07:23.927 5847.828 - 5873.034: 
6.1963% ( 220) 00:07:23.927 5873.034 - 5898.240: 7.4205% ( 228) 00:07:23.927 5898.240 - 5923.446: 8.6555% ( 230) 00:07:23.927 5923.446 - 5948.652: 10.0354% ( 257) 00:07:23.927 5948.652 - 5973.858: 11.3832% ( 251) 00:07:23.927 5973.858 - 5999.065: 12.6611% ( 238) 00:07:23.927 5999.065 - 6024.271: 14.0142% ( 252) 00:07:23.927 6024.271 - 6049.477: 15.3941% ( 257) 00:07:23.927 6049.477 - 6074.683: 16.8546% ( 272) 00:07:23.927 6074.683 - 6099.889: 18.2668% ( 263) 00:07:23.927 6099.889 - 6125.095: 19.6574% ( 259) 00:07:23.927 6125.095 - 6150.302: 21.0964% ( 268) 00:07:23.927 6150.302 - 6175.508: 22.5569% ( 272) 00:07:23.927 6175.508 - 6200.714: 24.1355% ( 294) 00:07:23.927 6200.714 - 6225.920: 25.7893% ( 308) 00:07:23.927 6225.920 - 6251.126: 27.6095% ( 339) 00:07:23.927 6251.126 - 6276.332: 29.6660% ( 383) 00:07:23.927 6276.332 - 6301.538: 31.8084% ( 399) 00:07:23.927 6301.538 - 6326.745: 34.0851% ( 424) 00:07:23.927 6326.745 - 6351.951: 36.4637% ( 443) 00:07:23.927 6351.951 - 6377.157: 38.8316% ( 441) 00:07:23.927 6377.157 - 6402.363: 41.3015% ( 460) 00:07:23.927 6402.363 - 6427.569: 43.7232% ( 451) 00:07:23.927 6427.569 - 6452.775: 46.1931% ( 460) 00:07:23.927 6452.775 - 6503.188: 51.3316% ( 957) 00:07:23.927 6503.188 - 6553.600: 56.1748% ( 902) 00:07:23.927 6553.600 - 6604.012: 60.5241% ( 810) 00:07:23.927 6604.012 - 6654.425: 63.9605% ( 640) 00:07:23.927 6654.425 - 6704.837: 66.8868% ( 545) 00:07:23.927 6704.837 - 6755.249: 69.6789% ( 520) 00:07:23.927 6755.249 - 6805.662: 72.4119% ( 509) 00:07:23.927 6805.662 - 6856.074: 75.0322% ( 488) 00:07:23.927 6856.074 - 6906.486: 77.5827% ( 475) 00:07:23.927 6906.486 - 6956.898: 80.0473% ( 459) 00:07:23.927 6956.898 - 7007.311: 82.4742% ( 452) 00:07:23.927 7007.311 - 7057.723: 84.8046% ( 434) 00:07:23.927 7057.723 - 7108.135: 86.6570% ( 345) 00:07:23.927 7108.135 - 7158.548: 87.8705% ( 226) 00:07:23.927 7158.548 - 7208.960: 88.5954% ( 135) 00:07:23.927 7208.960 - 7259.372: 88.9605% ( 68) 00:07:23.927 7259.372 - 7309.785: 89.2236% ( 49) 00:07:23.927 7309.785 - 7360.197: 89.4276% ( 38) 00:07:23.927 7360.197 - 7410.609: 89.6048% ( 33) 00:07:23.927 7410.609 - 7461.022: 89.7498% ( 27) 00:07:23.927 7461.022 - 7511.434: 89.8787% ( 24) 00:07:23.927 7511.434 - 7561.846: 90.0075% ( 24) 00:07:23.927 7561.846 - 7612.258: 90.1042% ( 18) 00:07:23.927 7612.258 - 7662.671: 90.2223% ( 22) 00:07:23.927 7662.671 - 7713.083: 90.3243% ( 19) 00:07:23.927 7713.083 - 7763.495: 90.4317% ( 20) 00:07:23.927 7763.495 - 7813.908: 90.5391% ( 20) 00:07:23.927 7813.908 - 7864.320: 90.6357% ( 18) 00:07:23.927 7864.320 - 7914.732: 90.7431% ( 20) 00:07:23.927 7914.732 - 7965.145: 90.8666% ( 23) 00:07:23.927 7965.145 - 8015.557: 91.0277% ( 30) 00:07:23.927 8015.557 - 8065.969: 91.1512% ( 23) 00:07:23.927 8065.969 - 8116.382: 91.2532% ( 19) 00:07:23.927 8116.382 - 8166.794: 91.4250% ( 32) 00:07:23.927 8166.794 - 8217.206: 91.5915% ( 31) 00:07:23.927 8217.206 - 8267.618: 91.7365% ( 27) 00:07:23.927 8267.618 - 8318.031: 91.8976% ( 30) 00:07:23.927 8318.031 - 8368.443: 92.0479% ( 28) 00:07:23.927 8368.443 - 8418.855: 92.1875% ( 26) 00:07:23.927 8418.855 - 8469.268: 92.3754% ( 35) 00:07:23.927 8469.268 - 8519.680: 92.5419% ( 31) 00:07:23.927 8519.680 - 8570.092: 92.7191% ( 33) 00:07:23.927 8570.092 - 8620.505: 92.8802% ( 30) 00:07:23.927 8620.505 - 8670.917: 93.0305% ( 28) 00:07:23.927 8670.917 - 8721.329: 93.2023% ( 32) 00:07:23.927 8721.329 - 8771.742: 93.3258% ( 23) 00:07:23.927 8771.742 - 8822.154: 93.4815% ( 29) 00:07:23.927 8822.154 - 8872.566: 93.6265% ( 27) 00:07:23.927 8872.566 - 
8922.978: 93.8037% ( 33) 00:07:23.927 8922.978 - 8973.391: 93.9648% ( 30) 00:07:23.927 8973.391 - 9023.803: 94.1312% ( 31) 00:07:23.927 9023.803 - 9074.215: 94.2869% ( 29) 00:07:23.927 9074.215 - 9124.628: 94.4158% ( 24) 00:07:23.927 9124.628 - 9175.040: 94.5554% ( 26) 00:07:23.927 9175.040 - 9225.452: 94.7111% ( 29) 00:07:23.927 9225.452 - 9275.865: 94.8561% ( 27) 00:07:23.927 9275.865 - 9326.277: 94.9957% ( 26) 00:07:23.927 9326.277 - 9376.689: 95.1353% ( 26) 00:07:23.927 9376.689 - 9427.102: 95.2642% ( 24) 00:07:23.927 9427.102 - 9477.514: 95.4145% ( 28) 00:07:23.927 9477.514 - 9527.926: 95.5863% ( 32) 00:07:23.927 9527.926 - 9578.338: 95.7313% ( 27) 00:07:23.927 9578.338 - 9628.751: 95.8817% ( 28) 00:07:23.927 9628.751 - 9679.163: 95.9998% ( 22) 00:07:23.927 9679.163 - 9729.575: 96.1125% ( 21) 00:07:23.927 9729.575 - 9779.988: 96.2038% ( 17) 00:07:23.927 9779.988 - 9830.400: 96.2844% ( 15) 00:07:23.927 9830.400 - 9880.812: 96.3542% ( 13) 00:07:23.927 9880.812 - 9931.225: 96.4401% ( 16) 00:07:23.927 9931.225 - 9981.637: 96.5206% ( 15) 00:07:23.927 9981.637 - 10032.049: 96.6226% ( 19) 00:07:23.927 10032.049 - 10082.462: 96.7085% ( 16) 00:07:23.927 10082.462 - 10132.874: 96.7945% ( 16) 00:07:23.927 10132.874 - 10183.286: 96.8804% ( 16) 00:07:23.927 10183.286 - 10233.698: 96.9502% ( 13) 00:07:23.927 10233.698 - 10284.111: 97.0039% ( 10) 00:07:23.927 10284.111 - 10334.523: 97.0415% ( 7) 00:07:23.927 10334.523 - 10384.935: 97.0683% ( 5) 00:07:23.927 10384.935 - 10435.348: 97.1166% ( 9) 00:07:23.927 10435.348 - 10485.760: 97.1488% ( 6) 00:07:23.927 10485.760 - 10536.172: 97.1972% ( 9) 00:07:23.927 10536.172 - 10586.585: 97.2455% ( 9) 00:07:23.927 10586.585 - 10636.997: 97.2831% ( 7) 00:07:23.927 10636.997 - 10687.409: 97.3099% ( 5) 00:07:23.927 10687.409 - 10737.822: 97.3529% ( 8) 00:07:23.927 10737.822 - 10788.234: 97.3958% ( 8) 00:07:23.927 10788.234 - 10838.646: 97.4549% ( 11) 00:07:23.927 10838.646 - 10889.058: 97.5086% ( 10) 00:07:23.927 10889.058 - 10939.471: 97.5677% ( 11) 00:07:23.927 10939.471 - 10989.883: 97.6375% ( 13) 00:07:23.927 10989.883 - 11040.295: 97.7126% ( 14) 00:07:23.927 11040.295 - 11090.708: 97.7824% ( 13) 00:07:23.927 11090.708 - 11141.120: 97.8630% ( 15) 00:07:23.927 11141.120 - 11191.532: 97.9381% ( 14) 00:07:23.927 11191.532 - 11241.945: 98.0026% ( 12) 00:07:23.927 11241.945 - 11292.357: 98.0616% ( 11) 00:07:23.927 11292.357 - 11342.769: 98.1261% ( 12) 00:07:23.927 11342.769 - 11393.182: 98.1851% ( 11) 00:07:23.927 11393.182 - 11443.594: 98.2335% ( 9) 00:07:23.927 11443.594 - 11494.006: 98.2872% ( 10) 00:07:23.927 11494.006 - 11544.418: 98.3247% ( 7) 00:07:23.927 11544.418 - 11594.831: 98.3623% ( 7) 00:07:23.927 11594.831 - 11645.243: 98.4053% ( 8) 00:07:23.927 11645.243 - 11695.655: 98.4429% ( 7) 00:07:23.927 11695.655 - 11746.068: 98.4858% ( 8) 00:07:23.927 11746.068 - 11796.480: 98.5288% ( 8) 00:07:23.927 11796.480 - 11846.892: 98.5610% ( 6) 00:07:23.927 11846.892 - 11897.305: 98.5986% ( 7) 00:07:23.927 11897.305 - 11947.717: 98.6362% ( 7) 00:07:23.927 11947.717 - 11998.129: 98.6791% ( 8) 00:07:23.927 11998.129 - 12048.542: 98.7167% ( 7) 00:07:23.927 12048.542 - 12098.954: 98.7543% ( 7) 00:07:23.927 12098.954 - 12149.366: 98.7919% ( 7) 00:07:23.927 12149.366 - 12199.778: 98.8295% ( 7) 00:07:23.927 12199.778 - 12250.191: 98.8671% ( 7) 00:07:23.927 12250.191 - 12300.603: 98.9207% ( 10) 00:07:23.927 12300.603 - 12351.015: 98.9637% ( 8) 00:07:23.928 12351.015 - 12401.428: 99.0013% ( 7) 00:07:23.928 12401.428 - 12451.840: 99.0389% ( 7) 00:07:23.928 12451.840 - 
12502.252: 99.0711% ( 6) 00:07:23.928 12502.252 - 12552.665: 99.1140% ( 8) 00:07:23.928 12552.665 - 12603.077: 99.1409% ( 5) 00:07:23.928 12603.077 - 12653.489: 99.1731% ( 6) 00:07:23.928 12653.489 - 12703.902: 99.2000% ( 5) 00:07:23.928 12703.902 - 12754.314: 99.2214% ( 4) 00:07:23.928 12754.314 - 12804.726: 99.2375% ( 3) 00:07:23.928 12804.726 - 12855.138: 99.2590% ( 4) 00:07:23.928 12855.138 - 12905.551: 99.2698% ( 2) 00:07:23.928 12905.551 - 13006.375: 99.3127% ( 8) 00:07:23.928 23693.785 - 23794.609: 99.3235% ( 2) 00:07:23.928 23794.609 - 23895.434: 99.3449% ( 4) 00:07:23.928 23895.434 - 23996.258: 99.3664% ( 4) 00:07:23.928 23996.258 - 24097.083: 99.3879% ( 4) 00:07:23.928 24097.083 - 24197.908: 99.4094% ( 4) 00:07:23.928 24197.908 - 24298.732: 99.4362% ( 5) 00:07:23.928 24298.732 - 24399.557: 99.4577% ( 4) 00:07:23.928 24399.557 - 24500.382: 99.4792% ( 4) 00:07:23.928 24500.382 - 24601.206: 99.5006% ( 4) 00:07:23.928 24601.206 - 24702.031: 99.5221% ( 4) 00:07:23.928 24702.031 - 24802.855: 99.5436% ( 4) 00:07:23.928 24802.855 - 24903.680: 99.5651% ( 4) 00:07:23.928 24903.680 - 25004.505: 99.5866% ( 4) 00:07:23.928 25004.505 - 25105.329: 99.6080% ( 4) 00:07:23.928 25105.329 - 25206.154: 99.6295% ( 4) 00:07:23.928 25206.154 - 25306.978: 99.6564% ( 5) 00:07:23.928 28230.892 - 28432.542: 99.6886% ( 6) 00:07:23.928 28432.542 - 28634.191: 99.7369% ( 9) 00:07:23.928 28634.191 - 28835.840: 99.7745% ( 7) 00:07:23.928 28835.840 - 29037.489: 99.8174% ( 8) 00:07:23.928 29037.489 - 29239.138: 99.8604% ( 8) 00:07:23.928 29239.138 - 29440.788: 99.9034% ( 8) 00:07:23.928 29440.788 - 29642.437: 99.9517% ( 9) 00:07:23.928 29642.437 - 29844.086: 99.9946% ( 8) 00:07:23.928 29844.086 - 30045.735: 100.0000% ( 1) 00:07:23.928 00:07:23.928 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:23.928 ============================================================================== 00:07:23.928 Range in us Cumulative IO count 00:07:23.928 5646.178 - 5671.385: 0.0054% ( 1) 00:07:23.928 5671.385 - 5696.591: 0.0966% ( 17) 00:07:23.928 5696.591 - 5721.797: 0.4510% ( 66) 00:07:23.928 5721.797 - 5747.003: 1.0309% ( 108) 00:07:23.928 5747.003 - 5772.209: 1.6914% ( 123) 00:07:23.928 5772.209 - 5797.415: 2.4860% ( 148) 00:07:23.928 5797.415 - 5822.622: 3.5814% ( 204) 00:07:23.928 5822.622 - 5847.828: 4.7949% ( 226) 00:07:23.928 5847.828 - 5873.034: 5.9439% ( 214) 00:07:23.928 5873.034 - 5898.240: 7.1735% ( 229) 00:07:23.928 5898.240 - 5923.446: 8.3817% ( 225) 00:07:23.928 5923.446 - 5948.652: 9.6435% ( 235) 00:07:23.928 5948.652 - 5973.858: 10.9375% ( 241) 00:07:23.928 5973.858 - 5999.065: 12.2745% ( 249) 00:07:23.928 5999.065 - 6024.271: 13.5846% ( 244) 00:07:23.928 6024.271 - 6049.477: 14.9753% ( 259) 00:07:23.928 6049.477 - 6074.683: 16.3875% ( 263) 00:07:23.928 6074.683 - 6099.889: 17.8479% ( 272) 00:07:23.928 6099.889 - 6125.095: 19.2601% ( 263) 00:07:23.928 6125.095 - 6150.302: 20.7689% ( 281) 00:07:23.928 6150.302 - 6175.508: 22.2401% ( 274) 00:07:23.928 6175.508 - 6200.714: 23.8295% ( 296) 00:07:23.928 6200.714 - 6225.920: 25.4349% ( 299) 00:07:23.928 6225.920 - 6251.126: 27.2766% ( 343) 00:07:23.928 6251.126 - 6276.332: 29.2633% ( 370) 00:07:23.928 6276.332 - 6301.538: 31.5077% ( 418) 00:07:23.928 6301.538 - 6326.745: 33.8918% ( 444) 00:07:23.928 6326.745 - 6351.951: 36.3456% ( 457) 00:07:23.928 6351.951 - 6377.157: 38.8370% ( 464) 00:07:23.928 6377.157 - 6402.363: 41.2801% ( 455) 00:07:23.928 6402.363 - 6427.569: 43.7339% ( 457) 00:07:23.928 6427.569 - 6452.775: 46.4347% ( 503) 00:07:23.928 
6452.775 - 6503.188: 51.5249% ( 948) 00:07:23.928 6503.188 - 6553.600: 56.5185% ( 930) 00:07:23.928 6553.600 - 6604.012: 60.9429% ( 824) 00:07:23.928 6604.012 - 6654.425: 64.4921% ( 661) 00:07:23.928 6654.425 - 6704.837: 67.4399% ( 549) 00:07:23.928 6704.837 - 6755.249: 70.2266% ( 519) 00:07:23.928 6755.249 - 6805.662: 72.9381% ( 505) 00:07:23.928 6805.662 - 6856.074: 75.5745% ( 491) 00:07:23.928 6856.074 - 6906.486: 78.0982% ( 470) 00:07:23.928 6906.486 - 6956.898: 80.5466% ( 456) 00:07:23.928 6956.898 - 7007.311: 82.8447% ( 428) 00:07:23.928 7007.311 - 7057.723: 85.0569% ( 412) 00:07:23.928 7057.723 - 7108.135: 86.9255% ( 348) 00:07:23.928 7108.135 - 7158.548: 88.0584% ( 211) 00:07:23.928 7158.548 - 7208.960: 88.7189% ( 123) 00:07:23.928 7208.960 - 7259.372: 89.0518% ( 62) 00:07:23.928 7259.372 - 7309.785: 89.2826% ( 43) 00:07:23.928 7309.785 - 7360.197: 89.4598% ( 33) 00:07:23.928 7360.197 - 7410.609: 89.5887% ( 24) 00:07:23.928 7410.609 - 7461.022: 89.7229% ( 25) 00:07:23.928 7461.022 - 7511.434: 89.8948% ( 32) 00:07:23.928 7511.434 - 7561.846: 90.0827% ( 35) 00:07:23.928 7561.846 - 7612.258: 90.2116% ( 24) 00:07:23.928 7612.258 - 7662.671: 90.3512% ( 26) 00:07:23.928 7662.671 - 7713.083: 90.4800% ( 24) 00:07:23.928 7713.083 - 7763.495: 90.6089% ( 24) 00:07:23.928 7763.495 - 7813.908: 90.7055% ( 18) 00:07:23.928 7813.908 - 7864.320: 90.7915% ( 16) 00:07:23.928 7864.320 - 7914.732: 90.8774% ( 16) 00:07:23.928 7914.732 - 7965.145: 90.9472% ( 13) 00:07:23.928 7965.145 - 8015.557: 91.0223% ( 14) 00:07:23.928 8015.557 - 8065.969: 91.0868% ( 12) 00:07:23.928 8065.969 - 8116.382: 91.1405% ( 10) 00:07:23.928 8116.382 - 8166.794: 91.1942% ( 10) 00:07:23.928 8166.794 - 8217.206: 91.2532% ( 11) 00:07:23.928 8217.206 - 8267.618: 91.3123% ( 11) 00:07:23.928 8267.618 - 8318.031: 91.4197% ( 20) 00:07:23.928 8318.031 - 8368.443: 91.5163% ( 18) 00:07:23.928 8368.443 - 8418.855: 91.6076% ( 17) 00:07:23.928 8418.855 - 8469.268: 91.7204% ( 21) 00:07:23.928 8469.268 - 8519.680: 91.8707% ( 28) 00:07:23.928 8519.680 - 8570.092: 92.0586% ( 35) 00:07:23.928 8570.092 - 8620.505: 92.2251% ( 31) 00:07:23.928 8620.505 - 8670.917: 92.3862% ( 30) 00:07:23.928 8670.917 - 8721.329: 92.5473% ( 30) 00:07:23.928 8721.329 - 8771.742: 92.7083% ( 30) 00:07:23.928 8771.742 - 8822.154: 92.8802% ( 32) 00:07:23.928 8822.154 - 8872.566: 93.0735% ( 36) 00:07:23.928 8872.566 - 8922.978: 93.2292% ( 29) 00:07:23.928 8922.978 - 8973.391: 93.4278% ( 37) 00:07:23.928 8973.391 - 9023.803: 93.6372% ( 39) 00:07:23.928 9023.803 - 9074.215: 93.9003% ( 49) 00:07:23.928 9074.215 - 9124.628: 94.1205% ( 41) 00:07:23.928 9124.628 - 9175.040: 94.3192% ( 37) 00:07:23.928 9175.040 - 9225.452: 94.5178% ( 37) 00:07:23.928 9225.452 - 9275.865: 94.7433% ( 42) 00:07:23.928 9275.865 - 9326.277: 94.9313% ( 35) 00:07:23.928 9326.277 - 9376.689: 95.1031% ( 32) 00:07:23.928 9376.689 - 9427.102: 95.2642% ( 30) 00:07:23.928 9427.102 - 9477.514: 95.4199% ( 29) 00:07:23.928 9477.514 - 9527.926: 95.5702% ( 28) 00:07:23.928 9527.926 - 9578.338: 95.7582% ( 35) 00:07:23.928 9578.338 - 9628.751: 95.9246% ( 31) 00:07:23.928 9628.751 - 9679.163: 96.0535% ( 24) 00:07:23.928 9679.163 - 9729.575: 96.1823% ( 24) 00:07:23.928 9729.575 - 9779.988: 96.3005% ( 22) 00:07:23.928 9779.988 - 9830.400: 96.3918% ( 17) 00:07:23.928 9830.400 - 9880.812: 96.4830% ( 17) 00:07:23.928 9880.812 - 9931.225: 96.5636% ( 15) 00:07:23.928 9931.225 - 9981.637: 96.6387% ( 14) 00:07:23.928 9981.637 - 10032.049: 96.7139% ( 14) 00:07:23.928 10032.049 - 10082.462: 96.7945% ( 15) 00:07:23.928 
10082.462 - 10132.874: 96.8696% ( 14) 00:07:23.928 10132.874 - 10183.286: 96.9233% ( 10) 00:07:23.928 10183.286 - 10233.698: 96.9770% ( 10) 00:07:23.928 10233.698 - 10284.111: 97.0200% ( 8) 00:07:23.928 10284.111 - 10334.523: 97.0522% ( 6) 00:07:23.928 10334.523 - 10384.935: 97.0683% ( 3) 00:07:23.928 10384.935 - 10435.348: 97.0790% ( 2) 00:07:23.928 10435.348 - 10485.760: 97.1059% ( 5) 00:07:23.928 10485.760 - 10536.172: 97.1274% ( 4) 00:07:23.928 10536.172 - 10586.585: 97.1811% ( 10) 00:07:23.928 10586.585 - 10636.997: 97.2509% ( 13) 00:07:23.928 10636.997 - 10687.409: 97.3314% ( 15) 00:07:23.928 10687.409 - 10737.822: 97.4119% ( 15) 00:07:23.928 10737.822 - 10788.234: 97.4817% ( 13) 00:07:23.928 10788.234 - 10838.646: 97.5462% ( 12) 00:07:23.928 10838.646 - 10889.058: 97.6213% ( 14) 00:07:23.928 10889.058 - 10939.471: 97.7019% ( 15) 00:07:23.928 10939.471 - 10989.883: 97.7610% ( 11) 00:07:23.928 10989.883 - 11040.295: 97.8308% ( 13) 00:07:23.928 11040.295 - 11090.708: 97.9006% ( 13) 00:07:23.928 11090.708 - 11141.120: 97.9757% ( 14) 00:07:23.928 11141.120 - 11191.532: 98.0455% ( 13) 00:07:23.928 11191.532 - 11241.945: 98.1100% ( 12) 00:07:23.928 11241.945 - 11292.357: 98.1851% ( 14) 00:07:23.928 11292.357 - 11342.769: 98.2603% ( 14) 00:07:23.928 11342.769 - 11393.182: 98.3355% ( 14) 00:07:23.928 11393.182 - 11443.594: 98.4053% ( 13) 00:07:23.928 11443.594 - 11494.006: 98.4375% ( 6) 00:07:23.928 11494.006 - 11544.418: 98.4643% ( 5) 00:07:23.928 11544.418 - 11594.831: 98.4858% ( 4) 00:07:23.928 11594.831 - 11645.243: 98.5127% ( 5) 00:07:23.928 11645.243 - 11695.655: 98.5395% ( 5) 00:07:23.928 11695.655 - 11746.068: 98.5717% ( 6) 00:07:23.928 11746.068 - 11796.480: 98.6147% ( 8) 00:07:23.928 11796.480 - 11846.892: 98.6469% ( 6) 00:07:23.928 11846.892 - 11897.305: 98.6738% ( 5) 00:07:23.928 11897.305 - 11947.717: 98.7006% ( 5) 00:07:23.928 11947.717 - 11998.129: 98.7167% ( 3) 00:07:23.928 11998.129 - 12048.542: 98.7489% ( 6) 00:07:23.928 12048.542 - 12098.954: 98.7865% ( 7) 00:07:23.928 12098.954 - 12149.366: 98.8241% ( 7) 00:07:23.928 12149.366 - 12199.778: 98.8617% ( 7) 00:07:23.928 12199.778 - 12250.191: 98.8939% ( 6) 00:07:23.928 12250.191 - 12300.603: 98.9315% ( 7) 00:07:23.928 12300.603 - 12351.015: 98.9637% ( 6) 00:07:23.929 12351.015 - 12401.428: 99.0013% ( 7) 00:07:23.929 12401.428 - 12451.840: 99.0335% ( 6) 00:07:23.929 12451.840 - 12502.252: 99.0711% ( 7) 00:07:23.929 12502.252 - 12552.665: 99.1087% ( 7) 00:07:23.929 12552.665 - 12603.077: 99.1409% ( 6) 00:07:23.929 12603.077 - 12653.489: 99.1838% ( 8) 00:07:23.929 12653.489 - 12703.902: 99.2161% ( 6) 00:07:23.929 12703.902 - 12754.314: 99.2375% ( 4) 00:07:23.929 12754.314 - 12804.726: 99.2590% ( 4) 00:07:23.929 12804.726 - 12855.138: 99.2751% ( 3) 00:07:23.929 12855.138 - 12905.551: 99.2966% ( 4) 00:07:23.929 12905.551 - 13006.375: 99.3127% ( 3) 00:07:23.929 21979.766 - 22080.591: 99.3181% ( 1) 00:07:23.929 22080.591 - 22181.415: 99.3449% ( 5) 00:07:23.929 22181.415 - 22282.240: 99.3664% ( 4) 00:07:23.929 22282.240 - 22383.065: 99.3825% ( 3) 00:07:23.929 22383.065 - 22483.889: 99.4094% ( 5) 00:07:23.929 22483.889 - 22584.714: 99.4308% ( 4) 00:07:23.929 22584.714 - 22685.538: 99.4523% ( 4) 00:07:23.929 22685.538 - 22786.363: 99.4738% ( 4) 00:07:23.929 22786.363 - 22887.188: 99.4953% ( 4) 00:07:23.929 22887.188 - 22988.012: 99.5221% ( 5) 00:07:23.929 22988.012 - 23088.837: 99.5436% ( 4) 00:07:23.929 23088.837 - 23189.662: 99.5651% ( 4) 00:07:23.929 23189.662 - 23290.486: 99.5866% ( 4) 00:07:23.929 23290.486 - 23391.311: 99.6080% ( 
4) 00:07:23.929 23391.311 - 23492.135: 99.6349% ( 5) 00:07:23.929 23492.135 - 23592.960: 99.6564% ( 4) 00:07:23.929 26416.049 - 26617.698: 99.6832% ( 5) 00:07:23.929 26617.698 - 26819.348: 99.7262% ( 8) 00:07:23.929 26819.348 - 27020.997: 99.7691% ( 8) 00:07:23.929 27020.997 - 27222.646: 99.8174% ( 9) 00:07:23.929 27222.646 - 27424.295: 99.8604% ( 8) 00:07:23.929 27424.295 - 27625.945: 99.9087% ( 9) 00:07:23.929 27625.945 - 27827.594: 99.9517% ( 8) 00:07:23.929 27827.594 - 28029.243: 99.9946% ( 8) 00:07:23.929 28029.243 - 28230.892: 100.0000% ( 1) 00:07:23.929 00:07:23.929 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:23.929 ============================================================================== 00:07:23.929 Range in us Cumulative IO count 00:07:23.929 5620.972 - 5646.178: 0.0054% ( 1) 00:07:23.929 5646.178 - 5671.385: 0.0268% ( 4) 00:07:23.929 5671.385 - 5696.591: 0.1020% ( 14) 00:07:23.929 5696.591 - 5721.797: 0.3866% ( 53) 00:07:23.929 5721.797 - 5747.003: 0.9128% ( 98) 00:07:23.929 5747.003 - 5772.209: 1.6538% ( 138) 00:07:23.929 5772.209 - 5797.415: 2.6095% ( 178) 00:07:23.929 5797.415 - 5822.622: 3.5921% ( 183) 00:07:23.929 5822.622 - 5847.828: 4.7949% ( 224) 00:07:23.929 5847.828 - 5873.034: 6.0782% ( 239) 00:07:23.929 5873.034 - 5898.240: 7.2219% ( 213) 00:07:23.929 5898.240 - 5923.446: 8.4622% ( 231) 00:07:23.929 5923.446 - 5948.652: 9.6649% ( 224) 00:07:23.929 5948.652 - 5973.858: 11.0127% ( 251) 00:07:23.929 5973.858 - 5999.065: 12.3228% ( 244) 00:07:23.929 5999.065 - 6024.271: 13.7242% ( 261) 00:07:23.929 6024.271 - 6049.477: 15.0934% ( 255) 00:07:23.929 6049.477 - 6074.683: 16.5217% ( 266) 00:07:23.929 6074.683 - 6099.889: 17.9822% ( 272) 00:07:23.929 6099.889 - 6125.095: 19.4212% ( 268) 00:07:23.929 6125.095 - 6150.302: 20.9085% ( 277) 00:07:23.929 6150.302 - 6175.508: 22.3958% ( 277) 00:07:23.929 6175.508 - 6200.714: 23.9100% ( 282) 00:07:23.929 6200.714 - 6225.920: 25.4832% ( 293) 00:07:23.929 6225.920 - 6251.126: 27.2390% ( 327) 00:07:23.929 6251.126 - 6276.332: 29.3116% ( 386) 00:07:23.929 6276.332 - 6301.538: 31.5292% ( 413) 00:07:23.929 6301.538 - 6326.745: 34.0421% ( 468) 00:07:23.929 6326.745 - 6351.951: 36.4369% ( 446) 00:07:23.929 6351.951 - 6377.157: 38.8853% ( 456) 00:07:23.929 6377.157 - 6402.363: 41.3230% ( 454) 00:07:23.929 6402.363 - 6427.569: 43.8789% ( 476) 00:07:23.929 6427.569 - 6452.775: 46.4508% ( 479) 00:07:23.929 6452.775 - 6503.188: 51.4927% ( 939) 00:07:23.929 6503.188 - 6553.600: 56.3628% ( 907) 00:07:23.929 6553.600 - 6604.012: 60.7227% ( 812) 00:07:23.929 6604.012 - 6654.425: 64.3524% ( 676) 00:07:23.929 6654.425 - 6704.837: 67.3217% ( 553) 00:07:23.929 6704.837 - 6755.249: 70.0816% ( 514) 00:07:23.929 6755.249 - 6805.662: 72.7932% ( 505) 00:07:23.929 6805.662 - 6856.074: 75.4188% ( 489) 00:07:23.929 6856.074 - 6906.486: 77.8780% ( 458) 00:07:23.929 6906.486 - 6956.898: 80.2996% ( 451) 00:07:23.929 6956.898 - 7007.311: 82.6353% ( 435) 00:07:23.929 7007.311 - 7057.723: 84.8099% ( 405) 00:07:23.929 7057.723 - 7108.135: 86.7537% ( 362) 00:07:23.929 7108.135 - 7158.548: 88.0155% ( 235) 00:07:23.929 7158.548 - 7208.960: 88.6920% ( 126) 00:07:23.929 7208.960 - 7259.372: 89.0571% ( 68) 00:07:23.929 7259.372 - 7309.785: 89.2880% ( 43) 00:07:23.929 7309.785 - 7360.197: 89.4974% ( 39) 00:07:23.929 7360.197 - 7410.609: 89.6585% ( 30) 00:07:23.929 7410.609 - 7461.022: 89.8088% ( 28) 00:07:23.929 7461.022 - 7511.434: 89.9699% ( 30) 00:07:23.929 7511.434 - 7561.846: 90.0881% ( 22) 00:07:23.929 7561.846 - 7612.258: 90.2223% ( 25) 
00:07:23.929 7612.258 - 7662.671: 90.3351% ( 21) 00:07:23.929 7662.671 - 7713.083: 90.4317% ( 18) 00:07:23.929 7713.083 - 7763.495: 90.5284% ( 18) 00:07:23.929 7763.495 - 7813.908: 90.6196% ( 17) 00:07:23.929 7813.908 - 7864.320: 90.7002% ( 15) 00:07:23.929 7864.320 - 7914.732: 90.7861% ( 16) 00:07:23.929 7914.732 - 7965.145: 90.8613% ( 14) 00:07:23.929 7965.145 - 8015.557: 90.9418% ( 15) 00:07:23.929 8015.557 - 8065.969: 91.0546% ( 21) 00:07:23.929 8065.969 - 8116.382: 91.1512% ( 18) 00:07:23.929 8116.382 - 8166.794: 91.2479% ( 18) 00:07:23.929 8166.794 - 8217.206: 91.3391% ( 17) 00:07:23.929 8217.206 - 8267.618: 91.4089% ( 13) 00:07:23.929 8267.618 - 8318.031: 91.4948% ( 16) 00:07:23.929 8318.031 - 8368.443: 91.5700% ( 14) 00:07:23.929 8368.443 - 8418.855: 91.6398% ( 13) 00:07:23.929 8418.855 - 8469.268: 91.7204% ( 15) 00:07:23.929 8469.268 - 8519.680: 91.7955% ( 14) 00:07:23.929 8519.680 - 8570.092: 91.8653% ( 13) 00:07:23.929 8570.092 - 8620.505: 91.9512% ( 16) 00:07:23.929 8620.505 - 8670.917: 92.0586% ( 20) 00:07:23.929 8670.917 - 8721.329: 92.2197% ( 30) 00:07:23.929 8721.329 - 8771.742: 92.3432% ( 23) 00:07:23.929 8771.742 - 8822.154: 92.4936% ( 28) 00:07:23.929 8822.154 - 8872.566: 92.6815% ( 35) 00:07:23.929 8872.566 - 8922.978: 92.8909% ( 39) 00:07:23.929 8922.978 - 8973.391: 93.1057% ( 40) 00:07:23.929 8973.391 - 9023.803: 93.3473% ( 45) 00:07:23.929 9023.803 - 9074.215: 93.5674% ( 41) 00:07:23.929 9074.215 - 9124.628: 93.7930% ( 42) 00:07:23.929 9124.628 - 9175.040: 94.0077% ( 40) 00:07:23.929 9175.040 - 9225.452: 94.2279% ( 41) 00:07:23.929 9225.452 - 9275.865: 94.4480% ( 41) 00:07:23.929 9275.865 - 9326.277: 94.6735% ( 42) 00:07:23.929 9326.277 - 9376.689: 94.8937% ( 41) 00:07:23.929 9376.689 - 9427.102: 95.0870% ( 36) 00:07:23.929 9427.102 - 9477.514: 95.2749% ( 35) 00:07:23.929 9477.514 - 9527.926: 95.4575% ( 34) 00:07:23.929 9527.926 - 9578.338: 95.6561% ( 37) 00:07:23.929 9578.338 - 9628.751: 95.8172% ( 30) 00:07:23.929 9628.751 - 9679.163: 95.9515% ( 25) 00:07:23.929 9679.163 - 9729.575: 96.1018% ( 28) 00:07:23.929 9729.575 - 9779.988: 96.2307% ( 24) 00:07:23.929 9779.988 - 9830.400: 96.3327% ( 19) 00:07:23.929 9830.400 - 9880.812: 96.3971% ( 12) 00:07:23.929 9880.812 - 9931.225: 96.4830% ( 16) 00:07:23.929 9931.225 - 9981.637: 96.5689% ( 16) 00:07:23.929 9981.637 - 10032.049: 96.6656% ( 18) 00:07:23.929 10032.049 - 10082.462: 96.7515% ( 16) 00:07:23.929 10082.462 - 10132.874: 96.8320% ( 15) 00:07:23.929 10132.874 - 10183.286: 96.8965% ( 12) 00:07:23.929 10183.286 - 10233.698: 96.9555% ( 11) 00:07:23.929 10233.698 - 10284.111: 96.9985% ( 8) 00:07:23.929 10284.111 - 10334.523: 97.0522% ( 10) 00:07:23.929 10334.523 - 10384.935: 97.1220% ( 13) 00:07:23.929 10384.935 - 10435.348: 97.1811% ( 11) 00:07:23.929 10435.348 - 10485.760: 97.2455% ( 12) 00:07:23.929 10485.760 - 10536.172: 97.3153% ( 13) 00:07:23.929 10536.172 - 10586.585: 97.3744% ( 11) 00:07:23.929 10586.585 - 10636.997: 97.4334% ( 11) 00:07:23.929 10636.997 - 10687.409: 97.4925% ( 11) 00:07:23.929 10687.409 - 10737.822: 97.5677% ( 14) 00:07:23.929 10737.822 - 10788.234: 97.6428% ( 14) 00:07:23.930 10788.234 - 10838.646: 97.7019% ( 11) 00:07:23.930 10838.646 - 10889.058: 97.7663% ( 12) 00:07:23.930 10889.058 - 10939.471: 97.8254% ( 11) 00:07:23.930 10939.471 - 10989.883: 97.8898% ( 12) 00:07:23.930 10989.883 - 11040.295: 97.9328% ( 8) 00:07:23.930 11040.295 - 11090.708: 97.9811% ( 9) 00:07:23.930 11090.708 - 11141.120: 98.0616% ( 15) 00:07:23.930 11141.120 - 11191.532: 98.1422% ( 15) 00:07:23.930 11191.532 - 
11241.945: 98.2281% ( 16) 00:07:23.930 11241.945 - 11292.357: 98.2818% ( 10) 00:07:23.930 11292.357 - 11342.769: 98.3247% ( 8) 00:07:23.930 11342.769 - 11393.182: 98.3784% ( 10) 00:07:23.930 11393.182 - 11443.594: 98.4214% ( 8) 00:07:23.930 11443.594 - 11494.006: 98.4590% ( 7) 00:07:23.930 11494.006 - 11544.418: 98.5073% ( 9) 00:07:23.930 11544.418 - 11594.831: 98.5503% ( 8) 00:07:23.930 11594.831 - 11645.243: 98.5986% ( 9) 00:07:23.930 11645.243 - 11695.655: 98.6415% ( 8) 00:07:23.930 11695.655 - 11746.068: 98.6630% ( 4) 00:07:23.930 11746.068 - 11796.480: 98.6899% ( 5) 00:07:23.930 11796.480 - 11846.892: 98.7221% ( 6) 00:07:23.930 11846.892 - 11897.305: 98.7543% ( 6) 00:07:23.930 11897.305 - 11947.717: 98.7811% ( 5) 00:07:23.930 11947.717 - 11998.129: 98.8134% ( 6) 00:07:23.930 11998.129 - 12048.542: 98.8456% ( 6) 00:07:23.930 12048.542 - 12098.954: 98.8671% ( 4) 00:07:23.930 12098.954 - 12149.366: 98.8832% ( 3) 00:07:23.930 12149.366 - 12199.778: 98.9100% ( 5) 00:07:23.930 12199.778 - 12250.191: 98.9422% ( 6) 00:07:23.930 12250.191 - 12300.603: 98.9744% ( 6) 00:07:23.930 12300.603 - 12351.015: 99.0067% ( 6) 00:07:23.930 12351.015 - 12401.428: 99.0389% ( 6) 00:07:23.930 12401.428 - 12451.840: 99.0711% ( 6) 00:07:23.930 12451.840 - 12502.252: 99.1033% ( 6) 00:07:23.930 12502.252 - 12552.665: 99.1194% ( 3) 00:07:23.930 12552.665 - 12603.077: 99.1409% ( 4) 00:07:23.930 12603.077 - 12653.489: 99.1570% ( 3) 00:07:23.930 12653.489 - 12703.902: 99.1731% ( 3) 00:07:23.930 12703.902 - 12754.314: 99.1946% ( 4) 00:07:23.930 12754.314 - 12804.726: 99.2053% ( 2) 00:07:23.930 12804.726 - 12855.138: 99.2214% ( 3) 00:07:23.930 12855.138 - 12905.551: 99.2375% ( 3) 00:07:23.930 12905.551 - 13006.375: 99.2751% ( 7) 00:07:23.930 13006.375 - 13107.200: 99.3127% ( 7) 00:07:23.930 20265.748 - 20366.572: 99.3288% ( 3) 00:07:23.930 20366.572 - 20467.397: 99.3503% ( 4) 00:07:23.930 20467.397 - 20568.222: 99.3718% ( 4) 00:07:23.930 20568.222 - 20669.046: 99.3986% ( 5) 00:07:23.930 20669.046 - 20769.871: 99.4201% ( 4) 00:07:23.930 20769.871 - 20870.695: 99.4416% ( 4) 00:07:23.930 20870.695 - 20971.520: 99.4631% ( 4) 00:07:23.930 20971.520 - 21072.345: 99.4845% ( 4) 00:07:23.930 21072.345 - 21173.169: 99.5060% ( 4) 00:07:23.930 21173.169 - 21273.994: 99.5275% ( 4) 00:07:23.930 21273.994 - 21374.818: 99.5490% ( 4) 00:07:23.930 21374.818 - 21475.643: 99.5704% ( 4) 00:07:23.930 21475.643 - 21576.468: 99.5973% ( 5) 00:07:23.930 21576.468 - 21677.292: 99.6188% ( 4) 00:07:23.930 21677.292 - 21778.117: 99.6402% ( 4) 00:07:23.930 21778.117 - 21878.942: 99.6564% ( 3) 00:07:23.930 24601.206 - 24702.031: 99.6617% ( 1) 00:07:23.930 24702.031 - 24802.855: 99.6832% ( 4) 00:07:23.930 24802.855 - 24903.680: 99.7047% ( 4) 00:07:23.930 24903.680 - 25004.505: 99.7262% ( 4) 00:07:23.930 25004.505 - 25105.329: 99.7476% ( 4) 00:07:23.930 25105.329 - 25206.154: 99.7691% ( 4) 00:07:23.930 25206.154 - 25306.978: 99.7906% ( 4) 00:07:23.930 25306.978 - 25407.803: 99.8121% ( 4) 00:07:23.930 25407.803 - 25508.628: 99.8335% ( 4) 00:07:23.930 25508.628 - 25609.452: 99.8550% ( 4) 00:07:23.930 25609.452 - 25710.277: 99.8819% ( 5) 00:07:23.930 25710.277 - 25811.102: 99.9034% ( 4) 00:07:23.930 25811.102 - 26012.751: 99.9463% ( 8) 00:07:23.930 26012.751 - 26214.400: 99.9893% ( 8) 00:07:23.930 26214.400 - 26416.049: 100.0000% ( 2) 00:07:23.930 00:07:23.930 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:23.930 ============================================================================== 00:07:23.930 Range in us Cumulative IO count 
00:07:23.930 5620.972 - 5646.178: 0.0054% ( 1) 00:07:23.930 5646.178 - 5671.385: 0.0430% ( 7) 00:07:23.930 5671.385 - 5696.591: 0.1450% ( 19) 00:07:23.930 5696.591 - 5721.797: 0.4403% ( 55) 00:07:23.930 5721.797 - 5747.003: 0.9289% ( 91) 00:07:23.930 5747.003 - 5772.209: 1.6430% ( 133) 00:07:23.930 5772.209 - 5797.415: 2.5827% ( 175) 00:07:23.930 5797.415 - 5822.622: 3.7317% ( 214) 00:07:23.930 5822.622 - 5847.828: 4.9506% ( 227) 00:07:23.930 5847.828 - 5873.034: 6.0513% ( 205) 00:07:23.930 5873.034 - 5898.240: 7.2487% ( 223) 00:07:23.930 5898.240 - 5923.446: 8.4192% ( 218) 00:07:23.930 5923.446 - 5948.652: 9.6113% ( 222) 00:07:23.930 5948.652 - 5973.858: 11.0073% ( 260) 00:07:23.930 5973.858 - 5999.065: 12.3819% ( 256) 00:07:23.930 5999.065 - 6024.271: 13.8316% ( 270) 00:07:23.930 6024.271 - 6049.477: 15.2384% ( 262) 00:07:23.930 6049.477 - 6074.683: 16.6935% ( 271) 00:07:23.930 6074.683 - 6099.889: 18.0949% ( 261) 00:07:23.930 6099.889 - 6125.095: 19.5608% ( 273) 00:07:23.930 6125.095 - 6150.302: 20.9890% ( 266) 00:07:23.930 6150.302 - 6175.508: 22.4119% ( 265) 00:07:23.930 6175.508 - 6200.714: 23.9207% ( 281) 00:07:23.930 6200.714 - 6225.920: 25.5584% ( 305) 00:07:23.930 6225.920 - 6251.126: 27.3357% ( 331) 00:07:23.930 6251.126 - 6276.332: 29.4029% ( 385) 00:07:23.930 6276.332 - 6301.538: 31.6098% ( 411) 00:07:23.930 6301.538 - 6326.745: 33.9669% ( 439) 00:07:23.930 6326.745 - 6351.951: 36.4315% ( 459) 00:07:23.930 6351.951 - 6377.157: 38.9068% ( 461) 00:07:23.930 6377.157 - 6402.363: 41.3821% ( 461) 00:07:23.930 6402.363 - 6427.569: 43.8574% ( 461) 00:07:23.930 6427.569 - 6452.775: 46.4347% ( 480) 00:07:23.930 6452.775 - 6503.188: 51.5571% ( 954) 00:07:23.930 6503.188 - 6553.600: 56.5507% ( 930) 00:07:23.930 6553.600 - 6604.012: 60.8409% ( 799) 00:07:23.930 6604.012 - 6654.425: 64.4223% ( 667) 00:07:23.930 6654.425 - 6704.837: 67.4560% ( 565) 00:07:23.930 6704.837 - 6755.249: 70.2105% ( 513) 00:07:23.930 6755.249 - 6805.662: 72.9167% ( 504) 00:07:23.930 6805.662 - 6856.074: 75.4940% ( 480) 00:07:23.930 6856.074 - 6906.486: 77.9639% ( 460) 00:07:23.930 6906.486 - 6956.898: 80.4285% ( 459) 00:07:23.930 6956.898 - 7007.311: 82.7964% ( 441) 00:07:23.930 7007.311 - 7057.723: 85.0462% ( 419) 00:07:23.930 7057.723 - 7108.135: 86.8986% ( 345) 00:07:23.930 7108.135 - 7158.548: 88.0906% ( 222) 00:07:23.930 7158.548 - 7208.960: 88.7081% ( 115) 00:07:23.930 7208.960 - 7259.372: 89.0195% ( 58) 00:07:23.930 7259.372 - 7309.785: 89.2182% ( 37) 00:07:23.930 7309.785 - 7360.197: 89.3739% ( 29) 00:07:23.930 7360.197 - 7410.609: 89.5350% ( 30) 00:07:23.930 7410.609 - 7461.022: 89.6746% ( 26) 00:07:23.930 7461.022 - 7511.434: 89.8464% ( 32) 00:07:23.930 7511.434 - 7561.846: 89.9968% ( 28) 00:07:23.930 7561.846 - 7612.258: 90.1364% ( 26) 00:07:23.930 7612.258 - 7662.671: 90.3243% ( 35) 00:07:23.930 7662.671 - 7713.083: 90.4532% ( 24) 00:07:23.930 7713.083 - 7763.495: 90.6089% ( 29) 00:07:23.930 7763.495 - 7813.908: 90.7163% ( 20) 00:07:23.930 7813.908 - 7864.320: 90.8022% ( 16) 00:07:23.930 7864.320 - 7914.732: 90.8774% ( 14) 00:07:23.930 7914.732 - 7965.145: 90.9579% ( 15) 00:07:23.930 7965.145 - 8015.557: 91.0438% ( 16) 00:07:23.930 8015.557 - 8065.969: 91.1458% ( 19) 00:07:23.930 8065.969 - 8116.382: 91.2371% ( 17) 00:07:23.930 8116.382 - 8166.794: 91.3338% ( 18) 00:07:23.930 8166.794 - 8217.206: 91.4304% ( 18) 00:07:23.930 8217.206 - 8267.618: 91.5110% ( 15) 00:07:23.930 8267.618 - 8318.031: 91.6022% ( 17) 00:07:23.930 8318.031 - 8368.443: 91.6774% ( 14) 00:07:23.930 8368.443 - 8418.855: 91.7687% ( 
17) 00:07:23.930 8418.855 - 8469.268: 91.8385% ( 13) 00:07:23.930 8469.268 - 8519.680: 91.9244% ( 16) 00:07:23.930 8519.680 - 8570.092: 92.0049% ( 15) 00:07:23.930 8570.092 - 8620.505: 92.0909% ( 16) 00:07:23.930 8620.505 - 8670.917: 92.1929% ( 19) 00:07:23.930 8670.917 - 8721.329: 92.2841% ( 17) 00:07:23.930 8721.329 - 8771.742: 92.3647% ( 15) 00:07:23.930 8771.742 - 8822.154: 92.4936% ( 24) 00:07:23.930 8822.154 - 8872.566: 92.6385% ( 27) 00:07:23.930 8872.566 - 8922.978: 92.7835% ( 27) 00:07:23.930 8922.978 - 8973.391: 92.9446% ( 30) 00:07:23.930 8973.391 - 9023.803: 93.1057% ( 30) 00:07:23.930 9023.803 - 9074.215: 93.2614% ( 29) 00:07:23.930 9074.215 - 9124.628: 93.4117% ( 28) 00:07:23.930 9124.628 - 9175.040: 93.5782% ( 31) 00:07:23.930 9175.040 - 9225.452: 93.7554% ( 33) 00:07:23.930 9225.452 - 9275.865: 93.9379% ( 34) 00:07:23.930 9275.865 - 9326.277: 94.1473% ( 39) 00:07:23.930 9326.277 - 9376.689: 94.3943% ( 46) 00:07:23.930 9376.689 - 9427.102: 94.6198% ( 42) 00:07:23.930 9427.102 - 9477.514: 94.8293% ( 39) 00:07:23.930 9477.514 - 9527.926: 95.0440% ( 40) 00:07:23.930 9527.926 - 9578.338: 95.2481% ( 38) 00:07:23.930 9578.338 - 9628.751: 95.4467% ( 37) 00:07:23.930 9628.751 - 9679.163: 95.5971% ( 28) 00:07:23.930 9679.163 - 9729.575: 95.7421% ( 27) 00:07:23.930 9729.575 - 9779.988: 95.9031% ( 30) 00:07:23.930 9779.988 - 9830.400: 96.0481% ( 27) 00:07:23.930 9830.400 - 9880.812: 96.1931% ( 27) 00:07:23.931 9880.812 - 9931.225: 96.3220% ( 24) 00:07:23.931 9931.225 - 9981.637: 96.4830% ( 30) 00:07:23.931 9981.637 - 10032.049: 96.6119% ( 24) 00:07:23.931 10032.049 - 10082.462: 96.7408% ( 24) 00:07:23.931 10082.462 - 10132.874: 96.8320% ( 17) 00:07:23.931 10132.874 - 10183.286: 96.9018% ( 13) 00:07:23.931 10183.286 - 10233.698: 96.9878% ( 16) 00:07:23.931 10233.698 - 10284.111: 97.0629% ( 14) 00:07:23.931 10284.111 - 10334.523: 97.1596% ( 18) 00:07:23.931 10334.523 - 10384.935: 97.2616% ( 19) 00:07:23.931 10384.935 - 10435.348: 97.3529% ( 17) 00:07:23.931 10435.348 - 10485.760: 97.4388% ( 16) 00:07:23.931 10485.760 - 10536.172: 97.4979% ( 11) 00:07:23.931 10536.172 - 10586.585: 97.5623% ( 12) 00:07:23.931 10586.585 - 10636.997: 97.6267% ( 12) 00:07:23.931 10636.997 - 10687.409: 97.7019% ( 14) 00:07:23.931 10687.409 - 10737.822: 97.7771% ( 14) 00:07:23.931 10737.822 - 10788.234: 97.8415% ( 12) 00:07:23.931 10788.234 - 10838.646: 97.9006% ( 11) 00:07:23.931 10838.646 - 10889.058: 97.9328% ( 6) 00:07:23.931 10889.058 - 10939.471: 97.9865% ( 10) 00:07:23.931 10939.471 - 10989.883: 98.0455% ( 11) 00:07:23.931 10989.883 - 11040.295: 98.1100% ( 12) 00:07:23.931 11040.295 - 11090.708: 98.1637% ( 10) 00:07:23.931 11090.708 - 11141.120: 98.2120% ( 9) 00:07:23.931 11141.120 - 11191.532: 98.2549% ( 8) 00:07:23.931 11191.532 - 11241.945: 98.2872% ( 6) 00:07:23.931 11241.945 - 11292.357: 98.3194% ( 6) 00:07:23.931 11292.357 - 11342.769: 98.3516% ( 6) 00:07:23.931 11342.769 - 11393.182: 98.3945% ( 8) 00:07:23.931 11393.182 - 11443.594: 98.4268% ( 6) 00:07:23.931 11443.594 - 11494.006: 98.4590% ( 6) 00:07:23.931 11494.006 - 11544.418: 98.4966% ( 7) 00:07:23.931 11544.418 - 11594.831: 98.5341% ( 7) 00:07:23.931 11594.831 - 11645.243: 98.5610% ( 5) 00:07:23.931 11645.243 - 11695.655: 98.5878% ( 5) 00:07:23.931 11695.655 - 11746.068: 98.6201% ( 6) 00:07:23.931 11746.068 - 11796.480: 98.6523% ( 6) 00:07:23.931 11796.480 - 11846.892: 98.6738% ( 4) 00:07:23.931 11846.892 - 11897.305: 98.6845% ( 2) 00:07:23.931 11897.305 - 11947.717: 98.7006% ( 3) 00:07:23.931 11947.717 - 11998.129: 98.7113% ( 2) 
00:07:23.931 11998.129 - 12048.542: 98.7221% ( 2) 00:07:23.931 12048.542 - 12098.954: 98.7382% ( 3) 00:07:23.931 12098.954 - 12149.366: 98.7489% ( 2) 00:07:23.931 12149.366 - 12199.778: 98.7650% ( 3) 00:07:23.931 12199.778 - 12250.191: 98.7758% ( 2) 00:07:23.931 12250.191 - 12300.603: 98.7865% ( 2) 00:07:23.931 12300.603 - 12351.015: 98.8026% ( 3) 00:07:23.931 12351.015 - 12401.428: 98.8187% ( 3) 00:07:23.931 12401.428 - 12451.840: 98.8509% ( 6) 00:07:23.931 12451.840 - 12502.252: 98.8832% ( 6) 00:07:23.931 12502.252 - 12552.665: 98.9207% ( 7) 00:07:23.931 12552.665 - 12603.077: 98.9530% ( 6) 00:07:23.931 12603.077 - 12653.489: 98.9798% ( 5) 00:07:23.931 12653.489 - 12703.902: 99.0120% ( 6) 00:07:23.931 12703.902 - 12754.314: 99.0496% ( 7) 00:07:23.931 12754.314 - 12804.726: 99.0765% ( 5) 00:07:23.931 12804.726 - 12855.138: 99.1087% ( 6) 00:07:23.931 12855.138 - 12905.551: 99.1355% ( 5) 00:07:23.931 12905.551 - 13006.375: 99.1946% ( 11) 00:07:23.931 13006.375 - 13107.200: 99.2483% ( 10) 00:07:23.931 13107.200 - 13208.025: 99.2859% ( 7) 00:07:23.931 13208.025 - 13308.849: 99.3127% ( 5) 00:07:23.931 18450.905 - 18551.729: 99.3235% ( 2) 00:07:23.931 18551.729 - 18652.554: 99.3449% ( 4) 00:07:23.931 18652.554 - 18753.378: 99.3664% ( 4) 00:07:23.931 18753.378 - 18854.203: 99.3879% ( 4) 00:07:23.931 18854.203 - 18955.028: 99.4147% ( 5) 00:07:23.931 18955.028 - 19055.852: 99.4362% ( 4) 00:07:23.931 19055.852 - 19156.677: 99.4577% ( 4) 00:07:23.931 19156.677 - 19257.502: 99.4792% ( 4) 00:07:23.931 19257.502 - 19358.326: 99.5006% ( 4) 00:07:23.931 19358.326 - 19459.151: 99.5221% ( 4) 00:07:23.931 19459.151 - 19559.975: 99.5436% ( 4) 00:07:23.931 19559.975 - 19660.800: 99.5651% ( 4) 00:07:23.931 19660.800 - 19761.625: 99.5866% ( 4) 00:07:23.931 19761.625 - 19862.449: 99.6080% ( 4) 00:07:23.931 19862.449 - 19963.274: 99.6295% ( 4) 00:07:23.931 19963.274 - 20064.098: 99.6564% ( 5) 00:07:23.931 22887.188 - 22988.012: 99.6725% ( 3) 00:07:23.931 22988.012 - 23088.837: 99.6939% ( 4) 00:07:23.931 23088.837 - 23189.662: 99.7154% ( 4) 00:07:23.931 23189.662 - 23290.486: 99.7369% ( 4) 00:07:23.931 23290.486 - 23391.311: 99.7637% ( 5) 00:07:23.931 23391.311 - 23492.135: 99.7852% ( 4) 00:07:23.931 23492.135 - 23592.960: 99.8067% ( 4) 00:07:23.931 23592.960 - 23693.785: 99.8282% ( 4) 00:07:23.931 23693.785 - 23794.609: 99.8443% ( 3) 00:07:23.931 23794.609 - 23895.434: 99.8658% ( 4) 00:07:23.931 23895.434 - 23996.258: 99.8926% ( 5) 00:07:23.931 23996.258 - 24097.083: 99.9141% ( 4) 00:07:23.931 24097.083 - 24197.908: 99.9356% ( 4) 00:07:23.931 24197.908 - 24298.732: 99.9570% ( 4) 00:07:23.931 24298.732 - 24399.557: 99.9839% ( 5) 00:07:23.931 24399.557 - 24500.382: 100.0000% ( 3) 00:07:23.931 00:07:23.931 04:27:46 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:25.308 Initializing NVMe Controllers 00:07:25.308 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:25.308 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:25.308 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:25.308 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:25.308 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:25.308 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:25.308 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:25.308 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:25.308 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:25.308 Associating PCIE 
(0000:00:12.0) NSID 3 with lcore 0 00:07:25.308 Initialization complete. Launching workers. 00:07:25.308 ======================================================== 00:07:25.308 Latency(us) 00:07:25.308 Device Information : IOPS MiB/s Average min max 00:07:25.308 PCIE (0000:00:10.0) NSID 1 from core 0: 17365.16 203.50 7378.77 5811.02 33189.28 00:07:25.308 PCIE (0000:00:11.0) NSID 1 from core 0: 17365.16 203.50 7366.99 5681.39 31322.81 00:07:25.308 PCIE (0000:00:13.0) NSID 1 from core 0: 17365.16 203.50 7355.11 5705.57 29836.07 00:07:25.308 PCIE (0000:00:12.0) NSID 1 from core 0: 17365.16 203.50 7343.39 5897.64 28146.90 00:07:25.308 PCIE (0000:00:12.0) NSID 2 from core 0: 17365.16 203.50 7331.83 5869.62 26378.48 00:07:25.308 PCIE (0000:00:12.0) NSID 3 from core 0: 17429.00 204.25 7293.48 5875.47 21093.49 00:07:25.308 ======================================================== 00:07:25.308 Total : 104254.80 1221.74 7344.90 5681.39 33189.28 00:07:25.308 00:07:25.308 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.308 ================================================================================= 00:07:25.308 1.00000% : 6125.095us 00:07:25.308 10.00000% : 6503.188us 00:07:25.308 25.00000% : 6654.425us 00:07:25.308 50.00000% : 6956.898us 00:07:25.308 75.00000% : 7561.846us 00:07:25.308 90.00000% : 8418.855us 00:07:25.308 95.00000% : 8822.154us 00:07:25.308 98.00000% : 10132.874us 00:07:25.308 99.00000% : 14115.446us 00:07:25.308 99.50000% : 27424.295us 00:07:25.308 99.90000% : 32868.825us 00:07:25.308 99.99000% : 33272.123us 00:07:25.308 99.99900% : 33272.123us 00:07:25.308 99.99990% : 33272.123us 00:07:25.308 99.99999% : 33272.123us 00:07:25.308 00:07:25.308 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.308 ================================================================================= 00:07:25.308 1.00000% : 6200.714us 00:07:25.308 10.00000% : 6553.600us 00:07:25.308 25.00000% : 6755.249us 00:07:25.308 50.00000% : 6956.898us 00:07:25.308 75.00000% : 7561.846us 00:07:25.308 90.00000% : 8368.443us 00:07:25.308 95.00000% : 8670.917us 00:07:25.308 98.00000% : 10233.698us 00:07:25.308 99.00000% : 14115.446us 00:07:25.308 99.50000% : 25811.102us 00:07:25.308 99.90000% : 31053.982us 00:07:25.308 99.99000% : 31457.280us 00:07:25.308 99.99900% : 31457.280us 00:07:25.308 99.99990% : 31457.280us 00:07:25.308 99.99999% : 31457.280us 00:07:25.308 00:07:25.308 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.308 ================================================================================= 00:07:25.308 1.00000% : 6200.714us 00:07:25.308 10.00000% : 6553.600us 00:07:25.308 25.00000% : 6755.249us 00:07:25.308 50.00000% : 6956.898us 00:07:25.308 75.00000% : 7612.258us 00:07:25.308 90.00000% : 8368.443us 00:07:25.308 95.00000% : 8670.917us 00:07:25.308 98.00000% : 10183.286us 00:07:25.308 99.00000% : 14216.271us 00:07:25.308 99.50000% : 24702.031us 00:07:25.308 99.90000% : 29440.788us 00:07:25.308 99.99000% : 29844.086us 00:07:25.308 99.99900% : 29844.086us 00:07:25.308 99.99990% : 29844.086us 00:07:25.308 99.99999% : 29844.086us 00:07:25.308 00:07:25.309 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:25.309 ================================================================================= 00:07:25.309 1.00000% : 6225.920us 00:07:25.309 10.00000% : 6553.600us 00:07:25.309 25.00000% : 6755.249us 00:07:25.309 50.00000% : 6956.898us 00:07:25.309 75.00000% : 7511.434us 00:07:25.309 90.00000% : 8318.031us 
00:07:25.309 95.00000% : 8620.505us 00:07:25.309 98.00000% : 10233.698us 00:07:25.309 99.00000% : 13308.849us 00:07:25.309 99.50000% : 22887.188us 00:07:25.309 99.90000% : 27827.594us 00:07:25.309 99.99000% : 28230.892us 00:07:25.309 99.99900% : 28230.892us 00:07:25.309 99.99990% : 28230.892us 00:07:25.309 99.99999% : 28230.892us 00:07:25.309 00:07:25.309 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:25.309 ================================================================================= 00:07:25.309 1.00000% : 6225.920us 00:07:25.309 10.00000% : 6553.600us 00:07:25.309 25.00000% : 6755.249us 00:07:25.309 50.00000% : 6956.898us 00:07:25.309 75.00000% : 7511.434us 00:07:25.309 90.00000% : 8318.031us 00:07:25.309 95.00000% : 8721.329us 00:07:25.309 98.00000% : 9981.637us 00:07:25.309 99.00000% : 12905.551us 00:07:25.309 99.50000% : 21072.345us 00:07:25.309 99.90000% : 26012.751us 00:07:25.309 99.99000% : 26416.049us 00:07:25.309 99.99900% : 26416.049us 00:07:25.309 99.99990% : 26416.049us 00:07:25.309 99.99999% : 26416.049us 00:07:25.309 00:07:25.309 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:25.309 ================================================================================= 00:07:25.309 1.00000% : 6200.714us 00:07:25.309 10.00000% : 6553.600us 00:07:25.309 25.00000% : 6755.249us 00:07:25.309 50.00000% : 6956.898us 00:07:25.309 75.00000% : 7561.846us 00:07:25.309 90.00000% : 8368.443us 00:07:25.309 95.00000% : 8721.329us 00:07:25.309 98.00000% : 10132.874us 00:07:25.309 99.00000% : 13409.674us 00:07:25.309 99.50000% : 14821.218us 00:07:25.309 99.90000% : 20669.046us 00:07:25.309 99.99000% : 21072.345us 00:07:25.309 99.99900% : 21173.169us 00:07:25.309 99.99990% : 21173.169us 00:07:25.309 99.99999% : 21173.169us 00:07:25.309 00:07:25.309 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.309 ============================================================================== 00:07:25.309 Range in us Cumulative IO count 00:07:25.309 5797.415 - 5822.622: 0.0057% ( 1) 00:07:25.309 5847.828 - 5873.034: 0.0172% ( 2) 00:07:25.309 5898.240 - 5923.446: 0.0230% ( 1) 00:07:25.309 5923.446 - 5948.652: 0.0402% ( 3) 00:07:25.309 5948.652 - 5973.858: 0.1436% ( 18) 00:07:25.309 5973.858 - 5999.065: 0.1953% ( 9) 00:07:25.309 5999.065 - 6024.271: 0.3102% ( 20) 00:07:25.309 6024.271 - 6049.477: 0.4021% ( 16) 00:07:25.309 6049.477 - 6074.683: 0.6434% ( 42) 00:07:25.309 6074.683 - 6099.889: 0.8272% ( 32) 00:07:25.309 6099.889 - 6125.095: 1.1317% ( 53) 00:07:25.309 6125.095 - 6150.302: 1.4419% ( 54) 00:07:25.309 6150.302 - 6175.508: 1.6659% ( 39) 00:07:25.309 6175.508 - 6200.714: 1.9876% ( 56) 00:07:25.309 6200.714 - 6225.920: 2.3495% ( 63) 00:07:25.309 6225.920 - 6251.126: 2.7803% ( 75) 00:07:25.309 6251.126 - 6276.332: 3.3318% ( 96) 00:07:25.309 6276.332 - 6301.538: 4.0097% ( 118) 00:07:25.309 6301.538 - 6326.745: 4.5726% ( 98) 00:07:25.309 6326.745 - 6351.951: 5.2792% ( 123) 00:07:25.309 6351.951 - 6377.157: 6.0317% ( 131) 00:07:25.309 6377.157 - 6402.363: 7.0715% ( 181) 00:07:25.309 6402.363 - 6427.569: 8.1744% ( 192) 00:07:25.309 6427.569 - 6452.775: 9.3693% ( 208) 00:07:25.309 6452.775 - 6503.188: 12.2128% ( 495) 00:07:25.309 6503.188 - 6553.600: 15.8490% ( 633) 00:07:25.309 6553.600 - 6604.012: 20.4331% ( 798) 00:07:25.309 6604.012 - 6654.425: 25.6376% ( 906) 00:07:25.309 6654.425 - 6704.837: 30.3309% ( 817) 00:07:25.309 6704.837 - 6755.249: 35.5124% ( 902) 00:07:25.309 6755.249 - 6805.662: 39.7403% ( 736) 00:07:25.309 6805.662 - 
6856.074: 43.9166% ( 727) 00:07:25.309 6856.074 - 6906.486: 47.7309% ( 664) 00:07:25.309 6906.486 - 6956.898: 50.9478% ( 560) 00:07:25.309 6956.898 - 7007.311: 53.9637% ( 525) 00:07:25.309 7007.311 - 7057.723: 56.8474% ( 502) 00:07:25.309 7057.723 - 7108.135: 59.4956% ( 461) 00:07:25.309 7108.135 - 7158.548: 61.9256% ( 423) 00:07:25.309 7158.548 - 7208.960: 64.3440% ( 421) 00:07:25.309 7208.960 - 7259.372: 66.4177% ( 361) 00:07:25.309 7259.372 - 7309.785: 68.1411% ( 300) 00:07:25.309 7309.785 - 7360.197: 70.1459% ( 349) 00:07:25.309 7360.197 - 7410.609: 71.7659% ( 282) 00:07:25.309 7410.609 - 7461.022: 73.2479% ( 258) 00:07:25.309 7461.022 - 7511.434: 74.3739% ( 196) 00:07:25.309 7511.434 - 7561.846: 75.2700% ( 156) 00:07:25.309 7561.846 - 7612.258: 76.0742% ( 140) 00:07:25.309 7612.258 - 7662.671: 76.6372% ( 98) 00:07:25.309 7662.671 - 7713.083: 77.3610% ( 126) 00:07:25.309 7713.083 - 7763.495: 77.9699% ( 106) 00:07:25.309 7763.495 - 7813.908: 78.8086% ( 146) 00:07:25.309 7813.908 - 7864.320: 79.4692% ( 115) 00:07:25.309 7864.320 - 7914.732: 80.0322% ( 98) 00:07:25.309 7914.732 - 7965.145: 81.1466% ( 194) 00:07:25.309 7965.145 - 8015.557: 82.4219% ( 222) 00:07:25.309 8015.557 - 8065.969: 84.0361% ( 281) 00:07:25.309 8065.969 - 8116.382: 85.2654% ( 214) 00:07:25.309 8116.382 - 8166.794: 86.4258% ( 202) 00:07:25.309 8166.794 - 8217.206: 87.3966% ( 169) 00:07:25.309 8217.206 - 8267.618: 88.4593% ( 185) 00:07:25.309 8267.618 - 8318.031: 89.2406% ( 136) 00:07:25.309 8318.031 - 8368.443: 89.9874% ( 130) 00:07:25.309 8368.443 - 8418.855: 90.6767% ( 120) 00:07:25.309 8418.855 - 8469.268: 91.3718% ( 121) 00:07:25.309 8469.268 - 8519.680: 92.1071% ( 128) 00:07:25.309 8519.680 - 8570.092: 92.7562% ( 113) 00:07:25.309 8570.092 - 8620.505: 93.4283% ( 117) 00:07:25.309 8620.505 - 8670.917: 93.9683% ( 94) 00:07:25.309 8670.917 - 8721.329: 94.4508% ( 84) 00:07:25.309 8721.329 - 8771.742: 94.8874% ( 76) 00:07:25.309 8771.742 - 8822.154: 95.3010% ( 72) 00:07:25.309 8822.154 - 8872.566: 95.6572% ( 62) 00:07:25.309 8872.566 - 8922.978: 95.9501% ( 51) 00:07:25.309 8922.978 - 8973.391: 96.3235% ( 65) 00:07:25.309 8973.391 - 9023.803: 96.5131% ( 33) 00:07:25.309 9023.803 - 9074.215: 96.8061% ( 51) 00:07:25.309 9074.215 - 9124.628: 96.9727% ( 29) 00:07:25.309 9124.628 - 9175.040: 97.0875% ( 20) 00:07:25.309 9175.040 - 9225.452: 97.2426% ( 27) 00:07:25.309 9225.452 - 9275.865: 97.3058% ( 11) 00:07:25.309 9275.865 - 9326.277: 97.3748% ( 12) 00:07:25.309 9326.277 - 9376.689: 97.4380% ( 11) 00:07:25.309 9376.689 - 9427.102: 97.5011% ( 11) 00:07:25.309 9427.102 - 9477.514: 97.5586% ( 10) 00:07:25.309 9477.514 - 9527.926: 97.6275% ( 12) 00:07:25.309 9527.926 - 9578.338: 97.6620% ( 6) 00:07:25.309 9578.338 - 9628.751: 97.7137% ( 9) 00:07:25.309 9628.751 - 9679.163: 97.7424% ( 5) 00:07:25.309 9679.163 - 9729.575: 97.7769% ( 6) 00:07:25.309 9729.575 - 9779.988: 97.7941% ( 3) 00:07:25.309 9931.225 - 9981.637: 97.8228% ( 5) 00:07:25.309 9981.637 - 10032.049: 97.8803% ( 10) 00:07:25.309 10032.049 - 10082.462: 97.9779% ( 17) 00:07:25.309 10082.462 - 10132.874: 98.0354% ( 10) 00:07:25.309 10132.874 - 10183.286: 98.0526% ( 3) 00:07:25.309 10183.286 - 10233.698: 98.0699% ( 3) 00:07:25.309 10233.698 - 10284.111: 98.0871% ( 3) 00:07:25.309 10284.111 - 10334.523: 98.1101% ( 4) 00:07:25.309 10334.523 - 10384.935: 98.1618% ( 9) 00:07:25.309 10384.935 - 10435.348: 98.2192% ( 10) 00:07:25.309 10435.348 - 10485.760: 98.2709% ( 9) 00:07:25.309 10485.760 - 10536.172: 98.3054% ( 6) 00:07:25.309 10536.172 - 10586.585: 98.3456% ( 7) 
00:07:25.309 10586.585 - 10636.997: 98.3686% ( 4) 00:07:25.309 10636.997 - 10687.409: 98.3915% ( 4) 00:07:25.309 10687.409 - 10737.822: 98.4203% ( 5) 00:07:25.309 10737.822 - 10788.234: 98.4375% ( 3) 00:07:25.309 10788.234 - 10838.646: 98.4490% ( 2) 00:07:25.309 10838.646 - 10889.058: 98.4605% ( 2) 00:07:25.309 10889.058 - 10939.471: 98.4720% ( 2) 00:07:25.309 10939.471 - 10989.883: 98.4835% ( 2) 00:07:25.309 10989.883 - 11040.295: 98.4892% ( 1) 00:07:25.309 11040.295 - 11090.708: 98.5179% ( 5) 00:07:25.309 11090.708 - 11141.120: 98.5409% ( 4) 00:07:25.309 11141.120 - 11191.532: 98.5581% ( 3) 00:07:25.309 11191.532 - 11241.945: 98.6213% ( 11) 00:07:25.309 11241.945 - 11292.357: 98.6673% ( 8) 00:07:25.309 11292.357 - 11342.769: 98.6788% ( 2) 00:07:25.309 11342.769 - 11393.182: 98.6903% ( 2) 00:07:25.309 11393.182 - 11443.594: 98.7132% ( 4) 00:07:25.309 11443.594 - 11494.006: 98.7420% ( 5) 00:07:25.309 11494.006 - 11544.418: 98.7534% ( 2) 00:07:25.309 11544.418 - 11594.831: 98.7649% ( 2) 00:07:25.309 11594.831 - 11645.243: 98.7764% ( 2) 00:07:25.309 11645.243 - 11695.655: 98.7937% ( 3) 00:07:25.309 11695.655 - 11746.068: 98.8166% ( 4) 00:07:25.309 11746.068 - 11796.480: 98.8281% ( 2) 00:07:25.309 11796.480 - 11846.892: 98.8339% ( 1) 00:07:25.309 11846.892 - 11897.305: 98.8454% ( 2) 00:07:25.309 11998.129 - 12048.542: 98.8626% ( 3) 00:07:25.309 12048.542 - 12098.954: 98.8683% ( 1) 00:07:25.309 12098.954 - 12149.366: 98.8741% ( 1) 00:07:25.309 12149.366 - 12199.778: 98.8798% ( 1) 00:07:25.309 12199.778 - 12250.191: 98.8913% ( 2) 00:07:25.309 12250.191 - 12300.603: 98.8971% ( 1) 00:07:25.309 13611.323 - 13712.148: 98.9028% ( 1) 00:07:25.309 13812.972 - 13913.797: 98.9258% ( 4) 00:07:25.309 13913.797 - 14014.622: 98.9660% ( 7) 00:07:25.309 14014.622 - 14115.446: 99.0694% ( 18) 00:07:25.309 14115.446 - 14216.271: 99.0981% ( 5) 00:07:25.309 14216.271 - 14317.095: 99.1441% ( 8) 00:07:25.309 14317.095 - 14417.920: 99.1728% ( 5) 00:07:25.309 14417.920 - 14518.745: 99.2130% ( 7) 00:07:25.309 14518.745 - 14619.569: 99.2475% ( 6) 00:07:25.309 14619.569 - 14720.394: 99.2590% ( 2) 00:07:25.309 14720.394 - 14821.218: 99.2647% ( 1) 00:07:25.310 26617.698 - 26819.348: 99.2992% ( 6) 00:07:25.310 26819.348 - 27020.997: 99.3796% ( 14) 00:07:25.310 27020.997 - 27222.646: 99.4773% ( 17) 00:07:25.310 27222.646 - 27424.295: 99.5347% ( 10) 00:07:25.310 27424.295 - 27625.945: 99.5634% ( 5) 00:07:25.310 27625.945 - 27827.594: 99.6094% ( 8) 00:07:25.310 28029.243 - 28230.892: 99.6324% ( 4) 00:07:25.310 31255.631 - 31457.280: 99.6381% ( 1) 00:07:25.310 31457.280 - 31658.929: 99.6783% ( 7) 00:07:25.310 31658.929 - 31860.578: 99.7243% ( 8) 00:07:25.310 31860.578 - 32062.228: 99.7702% ( 8) 00:07:25.310 32062.228 - 32263.877: 99.8162% ( 8) 00:07:25.310 32263.877 - 32465.526: 99.8564% ( 7) 00:07:25.310 32465.526 - 32667.175: 99.8966% ( 7) 00:07:25.310 32667.175 - 32868.825: 99.9311% ( 6) 00:07:25.310 32868.825 - 33070.474: 99.9770% ( 8) 00:07:25.310 33070.474 - 33272.123: 100.0000% ( 4) 00:07:25.310 00:07:25.310 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.310 ============================================================================== 00:07:25.310 Range in us Cumulative IO count 00:07:25.310 5671.385 - 5696.591: 0.0057% ( 1) 00:07:25.310 5772.209 - 5797.415: 0.0115% ( 1) 00:07:25.310 5898.240 - 5923.446: 0.0172% ( 1) 00:07:25.310 5923.446 - 5948.652: 0.0402% ( 4) 00:07:25.310 5948.652 - 5973.858: 0.0517% ( 2) 00:07:25.310 5973.858 - 5999.065: 0.0689% ( 3) 00:07:25.310 5999.065 - 6024.271: 0.1149% 
( 8) 00:07:25.310 6024.271 - 6049.477: 0.1666% ( 9) 00:07:25.310 6049.477 - 6074.683: 0.2413% ( 13) 00:07:25.310 6074.683 - 6099.889: 0.3619% ( 21) 00:07:25.310 6099.889 - 6125.095: 0.5285% ( 29) 00:07:25.310 6125.095 - 6150.302: 0.6778% ( 26) 00:07:25.310 6150.302 - 6175.508: 0.8387% ( 28) 00:07:25.310 6175.508 - 6200.714: 1.0110% ( 30) 00:07:25.310 6200.714 - 6225.920: 1.2121% ( 35) 00:07:25.310 6225.920 - 6251.126: 1.4017% ( 33) 00:07:25.310 6251.126 - 6276.332: 1.7061% ( 53) 00:07:25.310 6276.332 - 6301.538: 2.0852% ( 66) 00:07:25.310 6301.538 - 6326.745: 2.8205% ( 128) 00:07:25.310 6326.745 - 6351.951: 3.3490% ( 92) 00:07:25.310 6351.951 - 6377.157: 3.9465% ( 104) 00:07:25.310 6377.157 - 6402.363: 4.5209% ( 100) 00:07:25.310 6402.363 - 6427.569: 5.2390% ( 125) 00:07:25.310 6427.569 - 6452.775: 6.1638% ( 161) 00:07:25.310 6452.775 - 6503.188: 8.0767% ( 333) 00:07:25.310 6503.188 - 6553.600: 10.7996% ( 474) 00:07:25.310 6553.600 - 6604.012: 14.0568% ( 567) 00:07:25.310 6604.012 - 6654.425: 18.1296% ( 709) 00:07:25.310 6654.425 - 6704.837: 23.9832% ( 1019) 00:07:25.310 6704.837 - 6755.249: 30.4975% ( 1134) 00:07:25.310 6755.249 - 6805.662: 36.6613% ( 1073) 00:07:25.310 6805.662 - 6856.074: 43.6523% ( 1217) 00:07:25.310 6856.074 - 6906.486: 49.5577% ( 1028) 00:07:25.310 6906.486 - 6956.898: 55.3022% ( 1000) 00:07:25.310 6956.898 - 7007.311: 59.6622% ( 759) 00:07:25.310 7007.311 - 7057.723: 63.1721% ( 611) 00:07:25.310 7057.723 - 7108.135: 65.8892% ( 473) 00:07:25.310 7108.135 - 7158.548: 68.1181% ( 388) 00:07:25.310 7158.548 - 7208.960: 69.6289% ( 263) 00:07:25.310 7208.960 - 7259.372: 70.9099% ( 223) 00:07:25.310 7259.372 - 7309.785: 71.7084% ( 139) 00:07:25.310 7309.785 - 7360.197: 72.3805% ( 117) 00:07:25.310 7360.197 - 7410.609: 72.8918% ( 89) 00:07:25.310 7410.609 - 7461.022: 73.6328% ( 129) 00:07:25.310 7461.022 - 7511.434: 74.4773% ( 147) 00:07:25.310 7511.434 - 7561.846: 75.2872% ( 141) 00:07:25.310 7561.846 - 7612.258: 75.8387% ( 96) 00:07:25.310 7612.258 - 7662.671: 76.4591% ( 108) 00:07:25.310 7662.671 - 7713.083: 76.9818% ( 91) 00:07:25.310 7713.083 - 7763.495: 77.4759% ( 86) 00:07:25.310 7763.495 - 7813.908: 78.1767% ( 122) 00:07:25.310 7813.908 - 7864.320: 78.8948% ( 125) 00:07:25.310 7864.320 - 7914.732: 79.8771% ( 171) 00:07:25.310 7914.732 - 7965.145: 80.6641% ( 137) 00:07:25.310 7965.145 - 8015.557: 81.5142% ( 148) 00:07:25.310 8015.557 - 8065.969: 82.5770% ( 185) 00:07:25.310 8065.969 - 8116.382: 83.5880% ( 176) 00:07:25.310 8116.382 - 8166.794: 84.6795% ( 190) 00:07:25.310 8166.794 - 8217.206: 86.4660% ( 311) 00:07:25.310 8217.206 - 8267.618: 88.1032% ( 285) 00:07:25.310 8267.618 - 8318.031: 89.5163% ( 246) 00:07:25.310 8318.031 - 8368.443: 91.2684% ( 305) 00:07:25.310 8368.443 - 8418.855: 92.1932% ( 161) 00:07:25.310 8418.855 - 8469.268: 93.0205% ( 144) 00:07:25.310 8469.268 - 8519.680: 93.7385% ( 125) 00:07:25.310 8519.680 - 8570.092: 94.3187% ( 101) 00:07:25.310 8570.092 - 8620.505: 94.8012% ( 84) 00:07:25.310 8620.505 - 8670.917: 95.0483% ( 43) 00:07:25.310 8670.917 - 8721.329: 95.2436% ( 34) 00:07:25.310 8721.329 - 8771.742: 95.4733% ( 40) 00:07:25.310 8771.742 - 8822.154: 95.8525% ( 66) 00:07:25.310 8822.154 - 8872.566: 96.0018% ( 26) 00:07:25.310 8872.566 - 8922.978: 96.1569% ( 27) 00:07:25.310 8922.978 - 8973.391: 96.2948% ( 24) 00:07:25.310 8973.391 - 9023.803: 96.4097% ( 20) 00:07:25.310 9023.803 - 9074.215: 96.5763% ( 29) 00:07:25.310 9074.215 - 9124.628: 96.8176% ( 42) 00:07:25.310 9124.628 - 9175.040: 96.9095% ( 16) 00:07:25.310 9175.040 - 9225.452: 
96.9956% ( 15) 00:07:25.310 9225.452 - 9275.865: 97.0761% ( 14) 00:07:25.310 9275.865 - 9326.277: 97.1565% ( 14) 00:07:25.310 9326.277 - 9376.689: 97.2139% ( 10) 00:07:25.310 9376.689 - 9427.102: 97.2714% ( 10) 00:07:25.310 9427.102 - 9477.514: 97.3518% ( 14) 00:07:25.310 9477.514 - 9527.926: 97.4954% ( 25) 00:07:25.310 9527.926 - 9578.338: 97.5988% ( 18) 00:07:25.310 9578.338 - 9628.751: 97.6792% ( 14) 00:07:25.310 9628.751 - 9679.163: 97.7137% ( 6) 00:07:25.310 9679.163 - 9729.575: 97.7482% ( 6) 00:07:25.310 9729.575 - 9779.988: 97.7654% ( 3) 00:07:25.310 9779.988 - 9830.400: 97.7826% ( 3) 00:07:25.310 9830.400 - 9880.812: 97.7941% ( 2) 00:07:25.310 9931.225 - 9981.637: 97.7999% ( 1) 00:07:25.310 9981.637 - 10032.049: 97.8114% ( 2) 00:07:25.310 10032.049 - 10082.462: 97.8458% ( 6) 00:07:25.310 10082.462 - 10132.874: 97.8688% ( 4) 00:07:25.310 10132.874 - 10183.286: 97.9320% ( 11) 00:07:25.310 10183.286 - 10233.698: 98.0813% ( 26) 00:07:25.310 10233.698 - 10284.111: 98.0986% ( 3) 00:07:25.310 10284.111 - 10334.523: 98.1158% ( 3) 00:07:25.310 10334.523 - 10384.935: 98.1330% ( 3) 00:07:25.310 10384.935 - 10435.348: 98.1503% ( 3) 00:07:25.310 10435.348 - 10485.760: 98.1618% ( 2) 00:07:25.310 10536.172 - 10586.585: 98.1675% ( 1) 00:07:25.310 10586.585 - 10636.997: 98.1847% ( 3) 00:07:25.310 10636.997 - 10687.409: 98.2364% ( 9) 00:07:25.310 10687.409 - 10737.822: 98.2824% ( 8) 00:07:25.310 10737.822 - 10788.234: 98.3111% ( 5) 00:07:25.310 10788.234 - 10838.646: 98.4662% ( 27) 00:07:25.310 10838.646 - 10889.058: 98.5007% ( 6) 00:07:25.310 10889.058 - 10939.471: 98.5064% ( 1) 00:07:25.310 10939.471 - 10989.883: 98.5352% ( 5) 00:07:25.310 10989.883 - 11040.295: 98.5639% ( 5) 00:07:25.310 11040.295 - 11090.708: 98.6156% ( 9) 00:07:25.310 11090.708 - 11141.120: 98.7477% ( 23) 00:07:25.310 11141.120 - 11191.532: 98.7592% ( 2) 00:07:25.310 11191.532 - 11241.945: 98.7764% ( 3) 00:07:25.310 11241.945 - 11292.357: 98.7994% ( 4) 00:07:25.310 11292.357 - 11342.769: 98.8224% ( 4) 00:07:25.310 11342.769 - 11393.182: 98.8454% ( 4) 00:07:25.310 11393.182 - 11443.594: 98.8626% ( 3) 00:07:25.310 11443.594 - 11494.006: 98.8798% ( 3) 00:07:25.310 11494.006 - 11544.418: 98.8971% ( 3) 00:07:25.310 13812.972 - 13913.797: 98.9028% ( 1) 00:07:25.310 13913.797 - 14014.622: 98.9085% ( 1) 00:07:25.310 14014.622 - 14115.446: 99.0407% ( 23) 00:07:25.310 14115.446 - 14216.271: 99.0981% ( 10) 00:07:25.310 14216.271 - 14317.095: 99.1958% ( 17) 00:07:25.310 14317.095 - 14417.920: 99.2360% ( 7) 00:07:25.310 14417.920 - 14518.745: 99.2647% ( 5) 00:07:25.310 24702.031 - 24802.855: 99.2877% ( 4) 00:07:25.310 24802.855 - 24903.680: 99.3107% ( 4) 00:07:25.310 24903.680 - 25004.505: 99.3336% ( 4) 00:07:25.310 25004.505 - 25105.329: 99.3566% ( 4) 00:07:25.310 25105.329 - 25206.154: 99.3796% ( 4) 00:07:25.310 25206.154 - 25306.978: 99.4026% ( 4) 00:07:25.310 25306.978 - 25407.803: 99.4256% ( 4) 00:07:25.310 25407.803 - 25508.628: 99.4485% ( 4) 00:07:25.310 25508.628 - 25609.452: 99.4715% ( 4) 00:07:25.310 25609.452 - 25710.277: 99.4887% ( 3) 00:07:25.310 25710.277 - 25811.102: 99.5117% ( 4) 00:07:25.310 25811.102 - 26012.751: 99.5577% ( 8) 00:07:25.310 26012.751 - 26214.400: 99.6036% ( 8) 00:07:25.310 26214.400 - 26416.049: 99.6324% ( 5) 00:07:25.310 29642.437 - 29844.086: 99.6553% ( 4) 00:07:25.310 29844.086 - 30045.735: 99.7013% ( 8) 00:07:25.310 30045.735 - 30247.385: 99.7472% ( 8) 00:07:25.310 30247.385 - 30449.034: 99.7989% ( 9) 00:07:25.310 30449.034 - 30650.683: 99.8449% ( 8) 00:07:25.310 30650.683 - 30852.332: 99.8909% ( 8) 
00:07:25.310 30852.332 - 31053.982: 99.9368% ( 8) 00:07:25.310 31053.982 - 31255.631: 99.9828% ( 8) 00:07:25.310 31255.631 - 31457.280: 100.0000% ( 3) 00:07:25.310 00:07:25.310 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.310 ============================================================================== 00:07:25.310 Range in us Cumulative IO count 00:07:25.310 5696.591 - 5721.797: 0.0057% ( 1) 00:07:25.310 5747.003 - 5772.209: 0.0115% ( 1) 00:07:25.310 5797.415 - 5822.622: 0.0172% ( 1) 00:07:25.310 5898.240 - 5923.446: 0.0230% ( 1) 00:07:25.310 5923.446 - 5948.652: 0.0517% ( 5) 00:07:25.310 5948.652 - 5973.858: 0.1034% ( 9) 00:07:25.310 5973.858 - 5999.065: 0.1608% ( 10) 00:07:25.310 5999.065 - 6024.271: 0.2183% ( 10) 00:07:25.310 6024.271 - 6049.477: 0.3447% ( 22) 00:07:25.310 6049.477 - 6074.683: 0.4366% ( 16) 00:07:25.310 6074.683 - 6099.889: 0.5859% ( 26) 00:07:25.310 6099.889 - 6125.095: 0.7066% ( 21) 00:07:25.311 6125.095 - 6150.302: 0.8100% ( 18) 00:07:25.311 6150.302 - 6175.508: 0.9421% ( 23) 00:07:25.311 6175.508 - 6200.714: 1.1259% ( 32) 00:07:25.311 6200.714 - 6225.920: 1.3500% ( 39) 00:07:25.311 6225.920 - 6251.126: 1.7119% ( 63) 00:07:25.311 6251.126 - 6276.332: 2.0106% ( 52) 00:07:25.311 6276.332 - 6301.538: 2.3208% ( 54) 00:07:25.311 6301.538 - 6326.745: 2.7574% ( 76) 00:07:25.311 6326.745 - 6351.951: 3.1193% ( 63) 00:07:25.311 6351.951 - 6377.157: 3.5041% ( 67) 00:07:25.311 6377.157 - 6402.363: 4.1992% ( 121) 00:07:25.311 6402.363 - 6427.569: 5.0322% ( 145) 00:07:25.311 6427.569 - 6452.775: 6.0892% ( 184) 00:07:25.311 6452.775 - 6503.188: 8.1284% ( 355) 00:07:25.311 6503.188 - 6553.600: 10.2884% ( 376) 00:07:25.311 6553.600 - 6604.012: 13.5972% ( 576) 00:07:25.311 6604.012 - 6654.425: 17.8711% ( 744) 00:07:25.311 6654.425 - 6704.837: 23.0699% ( 905) 00:07:25.311 6704.837 - 6755.249: 29.0499% ( 1041) 00:07:25.311 6755.249 - 6805.662: 36.2592% ( 1255) 00:07:25.311 6805.662 - 6856.074: 42.8883% ( 1154) 00:07:25.311 6856.074 - 6906.486: 49.4600% ( 1144) 00:07:25.311 6906.486 - 6956.898: 54.8713% ( 942) 00:07:25.311 6956.898 - 7007.311: 59.5646% ( 817) 00:07:25.311 7007.311 - 7057.723: 63.8557% ( 747) 00:07:25.311 7057.723 - 7108.135: 66.6073% ( 479) 00:07:25.311 7108.135 - 7158.548: 68.4858% ( 327) 00:07:25.311 7158.548 - 7208.960: 70.1631% ( 292) 00:07:25.311 7208.960 - 7259.372: 71.1799% ( 177) 00:07:25.311 7259.372 - 7309.785: 71.9037% ( 126) 00:07:25.311 7309.785 - 7360.197: 72.7080% ( 140) 00:07:25.311 7360.197 - 7410.609: 73.4605% ( 131) 00:07:25.311 7410.609 - 7461.022: 73.9832% ( 91) 00:07:25.311 7461.022 - 7511.434: 74.5060% ( 91) 00:07:25.311 7511.434 - 7561.846: 74.9828% ( 83) 00:07:25.311 7561.846 - 7612.258: 75.8789% ( 156) 00:07:25.311 7612.258 - 7662.671: 76.6142% ( 128) 00:07:25.311 7662.671 - 7713.083: 77.1197% ( 88) 00:07:25.311 7713.083 - 7763.495: 77.7401% ( 108) 00:07:25.311 7763.495 - 7813.908: 78.5731% ( 145) 00:07:25.311 7813.908 - 7864.320: 79.4577% ( 154) 00:07:25.311 7864.320 - 7914.732: 80.5664% ( 193) 00:07:25.311 7914.732 - 7965.145: 81.7210% ( 201) 00:07:25.311 7965.145 - 8015.557: 82.6574% ( 163) 00:07:25.311 8015.557 - 8065.969: 83.5593% ( 157) 00:07:25.311 8065.969 - 8116.382: 84.6795% ( 195) 00:07:25.311 8116.382 - 8166.794: 85.8169% ( 198) 00:07:25.311 8166.794 - 8217.206: 87.0519% ( 215) 00:07:25.311 8217.206 - 8267.618: 88.6432% ( 277) 00:07:25.311 8267.618 - 8318.031: 89.8610% ( 212) 00:07:25.311 8318.031 - 8368.443: 90.9122% ( 183) 00:07:25.311 8368.443 - 8418.855: 91.8026% ( 155) 00:07:25.311 8418.855 - 
8469.268: 92.6988% ( 156) 00:07:25.311 8469.268 - 8519.680: 93.3824% ( 119) 00:07:25.311 8519.680 - 8570.092: 94.0487% ( 116) 00:07:25.311 8570.092 - 8620.505: 94.7438% ( 121) 00:07:25.311 8620.505 - 8670.917: 95.1689% ( 74) 00:07:25.311 8670.917 - 8721.329: 95.4676% ( 52) 00:07:25.311 8721.329 - 8771.742: 95.7089% ( 42) 00:07:25.311 8771.742 - 8822.154: 95.9903% ( 49) 00:07:25.311 8822.154 - 8872.566: 96.1397% ( 26) 00:07:25.311 8872.566 - 8922.978: 96.2776% ( 24) 00:07:25.311 8922.978 - 8973.391: 96.3982% ( 21) 00:07:25.311 8973.391 - 9023.803: 96.5188% ( 21) 00:07:25.311 9023.803 - 9074.215: 96.6452% ( 22) 00:07:25.311 9074.215 - 9124.628: 96.7429% ( 17) 00:07:25.311 9124.628 - 9175.040: 96.8290% ( 15) 00:07:25.311 9175.040 - 9225.452: 96.9267% ( 17) 00:07:25.311 9225.452 - 9275.865: 97.0761% ( 26) 00:07:25.311 9275.865 - 9326.277: 97.1278% ( 9) 00:07:25.311 9326.277 - 9376.689: 97.1622% ( 6) 00:07:25.311 9376.689 - 9427.102: 97.2024% ( 7) 00:07:25.311 9427.102 - 9477.514: 97.2771% ( 13) 00:07:25.311 9477.514 - 9527.926: 97.3403% ( 11) 00:07:25.311 9527.926 - 9578.338: 97.4150% ( 13) 00:07:25.311 9578.338 - 9628.751: 97.4954% ( 14) 00:07:25.311 9628.751 - 9679.163: 97.6218% ( 22) 00:07:25.311 9679.163 - 9729.575: 97.7252% ( 18) 00:07:25.311 9729.575 - 9779.988: 97.7826% ( 10) 00:07:25.311 9779.988 - 9830.400: 97.7999% ( 3) 00:07:25.311 9981.637 - 10032.049: 97.8286% ( 5) 00:07:25.311 10032.049 - 10082.462: 97.8631% ( 6) 00:07:25.311 10082.462 - 10132.874: 97.8918% ( 5) 00:07:25.311 10132.874 - 10183.286: 98.0699% ( 31) 00:07:25.311 10183.286 - 10233.698: 98.0928% ( 4) 00:07:25.311 10233.698 - 10284.111: 98.1043% ( 2) 00:07:25.311 10284.111 - 10334.523: 98.1216% ( 3) 00:07:25.311 10334.523 - 10384.935: 98.1560% ( 6) 00:07:25.311 10384.935 - 10435.348: 98.1733% ( 3) 00:07:25.311 10485.760 - 10536.172: 98.1790% ( 1) 00:07:25.311 10586.585 - 10636.997: 98.1847% ( 1) 00:07:25.311 10687.409 - 10737.822: 98.2135% ( 5) 00:07:25.311 10737.822 - 10788.234: 98.2479% ( 6) 00:07:25.311 10788.234 - 10838.646: 98.2881% ( 7) 00:07:25.311 10838.646 - 10889.058: 98.3284% ( 7) 00:07:25.311 10889.058 - 10939.471: 98.3686% ( 7) 00:07:25.311 10939.471 - 10989.883: 98.4030% ( 6) 00:07:25.311 10989.883 - 11040.295: 98.5007% ( 17) 00:07:25.311 11040.295 - 11090.708: 98.5524% ( 9) 00:07:25.311 11090.708 - 11141.120: 98.5811% ( 5) 00:07:25.311 11141.120 - 11191.532: 98.6443% ( 11) 00:07:25.311 11191.532 - 11241.945: 98.6788% ( 6) 00:07:25.311 11241.945 - 11292.357: 98.7305% ( 9) 00:07:25.311 11292.357 - 11342.769: 98.7764% ( 8) 00:07:25.311 11342.769 - 11393.182: 98.8224% ( 8) 00:07:25.311 11393.182 - 11443.594: 98.8511% ( 5) 00:07:25.311 11443.594 - 11494.006: 98.8741% ( 4) 00:07:25.311 11494.006 - 11544.418: 98.8856% ( 2) 00:07:25.311 11544.418 - 11594.831: 98.8971% ( 2) 00:07:25.311 13913.797 - 14014.622: 98.9143% ( 3) 00:07:25.311 14014.622 - 14115.446: 98.9602% ( 8) 00:07:25.311 14115.446 - 14216.271: 99.0349% ( 13) 00:07:25.311 14216.271 - 14317.095: 99.1211% ( 15) 00:07:25.311 14317.095 - 14417.920: 99.1613% ( 7) 00:07:25.311 14417.920 - 14518.745: 99.1785% ( 3) 00:07:25.311 14518.745 - 14619.569: 99.2073% ( 5) 00:07:25.311 14619.569 - 14720.394: 99.2360% ( 5) 00:07:25.311 14720.394 - 14821.218: 99.2647% ( 5) 00:07:25.311 23391.311 - 23492.135: 99.2762% ( 2) 00:07:25.311 23492.135 - 23592.960: 99.2934% ( 3) 00:07:25.311 23592.960 - 23693.785: 99.3107% ( 3) 00:07:25.311 23693.785 - 23794.609: 99.3222% ( 2) 00:07:25.311 23794.609 - 23895.434: 99.3394% ( 3) 00:07:25.311 23895.434 - 23996.258: 99.3624% ( 4) 
00:07:25.311 23996.258 - 24097.083: 99.3853% ( 4) 00:07:25.311 24097.083 - 24197.908: 99.4083% ( 4) 00:07:25.311 24197.908 - 24298.732: 99.4256% ( 3) 00:07:25.311 24298.732 - 24399.557: 99.4485% ( 4) 00:07:25.311 24399.557 - 24500.382: 99.4715% ( 4) 00:07:25.311 24500.382 - 24601.206: 99.4945% ( 4) 00:07:25.311 24601.206 - 24702.031: 99.5175% ( 4) 00:07:25.311 24702.031 - 24802.855: 99.5404% ( 4) 00:07:25.311 24802.855 - 24903.680: 99.5634% ( 4) 00:07:25.311 24903.680 - 25004.505: 99.5921% ( 5) 00:07:25.311 25004.505 - 25105.329: 99.6151% ( 4) 00:07:25.311 25105.329 - 25206.154: 99.6324% ( 3) 00:07:25.311 28029.243 - 28230.892: 99.6381% ( 1) 00:07:25.311 28230.892 - 28432.542: 99.6841% ( 8) 00:07:25.311 28432.542 - 28634.191: 99.7243% ( 7) 00:07:25.311 28634.191 - 28835.840: 99.7702% ( 8) 00:07:25.311 28835.840 - 29037.489: 99.8162% ( 8) 00:07:25.311 29037.489 - 29239.138: 99.8621% ( 8) 00:07:25.311 29239.138 - 29440.788: 99.9081% ( 8) 00:07:25.311 29440.788 - 29642.437: 99.9540% ( 8) 00:07:25.311 29642.437 - 29844.086: 100.0000% ( 8) 00:07:25.311 00:07:25.311 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:25.311 ============================================================================== 00:07:25.311 Range in us Cumulative IO count 00:07:25.311 5873.034 - 5898.240: 0.0057% ( 1) 00:07:25.311 5898.240 - 5923.446: 0.0172% ( 2) 00:07:25.311 5948.652 - 5973.858: 0.0287% ( 2) 00:07:25.311 5973.858 - 5999.065: 0.0345% ( 1) 00:07:25.311 5999.065 - 6024.271: 0.0402% ( 1) 00:07:25.311 6024.271 - 6049.477: 0.0574% ( 3) 00:07:25.311 6049.477 - 6074.683: 0.1321% ( 13) 00:07:25.311 6074.683 - 6099.889: 0.2125% ( 14) 00:07:25.311 6099.889 - 6125.095: 0.2872% ( 13) 00:07:25.311 6125.095 - 6150.302: 0.4136% ( 22) 00:07:25.311 6150.302 - 6175.508: 0.5802% ( 29) 00:07:25.311 6175.508 - 6200.714: 0.9306% ( 61) 00:07:25.311 6200.714 - 6225.920: 1.2925% ( 63) 00:07:25.311 6225.920 - 6251.126: 1.5970% ( 53) 00:07:25.311 6251.126 - 6276.332: 1.9646% ( 64) 00:07:25.311 6276.332 - 6301.538: 2.3495% ( 67) 00:07:25.311 6301.538 - 6326.745: 2.8550% ( 88) 00:07:25.311 6326.745 - 6351.951: 3.5731% ( 125) 00:07:25.311 6351.951 - 6377.157: 4.2969% ( 126) 00:07:25.311 6377.157 - 6402.363: 4.9230% ( 109) 00:07:25.311 6402.363 - 6427.569: 5.6870% ( 133) 00:07:25.311 6427.569 - 6452.775: 6.8244% ( 198) 00:07:25.311 6452.775 - 6503.188: 8.2663% ( 251) 00:07:25.311 6503.188 - 6553.600: 10.6388% ( 413) 00:07:25.311 6553.600 - 6604.012: 13.6317% ( 521) 00:07:25.311 6604.012 - 6654.425: 18.0147% ( 763) 00:07:25.311 6654.425 - 6704.837: 23.3169% ( 923) 00:07:25.311 6704.837 - 6755.249: 29.7507% ( 1120) 00:07:25.311 6755.249 - 6805.662: 36.1500% ( 1114) 00:07:25.311 6805.662 - 6856.074: 42.9458% ( 1183) 00:07:25.311 6856.074 - 6906.486: 49.1153% ( 1074) 00:07:25.311 6906.486 - 6956.898: 54.0728% ( 863) 00:07:25.311 6956.898 - 7007.311: 59.4669% ( 939) 00:07:25.311 7007.311 - 7057.723: 63.0859% ( 630) 00:07:25.311 7057.723 - 7108.135: 66.1075% ( 526) 00:07:25.311 7108.135 - 7158.548: 68.7328% ( 457) 00:07:25.311 7158.548 - 7208.960: 70.3240% ( 277) 00:07:25.311 7208.960 - 7259.372: 71.3523% ( 179) 00:07:25.311 7259.372 - 7309.785: 72.0416% ( 120) 00:07:25.311 7309.785 - 7360.197: 72.7884% ( 130) 00:07:25.311 7360.197 - 7410.609: 73.4145% ( 109) 00:07:25.311 7410.609 - 7461.022: 74.1556% ( 129) 00:07:25.311 7461.022 - 7511.434: 75.0632% ( 158) 00:07:25.312 7511.434 - 7561.846: 75.6319% ( 99) 00:07:25.312 7561.846 - 7612.258: 76.1374% ( 88) 00:07:25.312 7612.258 - 7662.671: 76.5855% ( 78) 00:07:25.312 7662.671 - 
7713.083: 76.9876% ( 70) 00:07:25.312 7713.083 - 7763.495: 77.5161% ( 92) 00:07:25.312 7763.495 - 7813.908: 77.9527% ( 76) 00:07:25.312 7813.908 - 7864.320: 78.4295% ( 83) 00:07:25.312 7864.320 - 7914.732: 79.4233% ( 173) 00:07:25.312 7914.732 - 7965.145: 80.2505% ( 144) 00:07:25.312 7965.145 - 8015.557: 81.1351% ( 154) 00:07:25.312 8015.557 - 8065.969: 82.1864% ( 183) 00:07:25.312 8065.969 - 8116.382: 83.4329% ( 217) 00:07:25.312 8116.382 - 8166.794: 85.2137% ( 310) 00:07:25.312 8166.794 - 8217.206: 86.7819% ( 273) 00:07:25.312 8217.206 - 8267.618: 89.0338% ( 392) 00:07:25.312 8267.618 - 8318.031: 90.3665% ( 232) 00:07:25.312 8318.031 - 8368.443: 91.3948% ( 179) 00:07:25.312 8368.443 - 8418.855: 92.3369% ( 164) 00:07:25.312 8418.855 - 8469.268: 93.2502% ( 159) 00:07:25.312 8469.268 - 8519.680: 93.9568% ( 123) 00:07:25.312 8519.680 - 8570.092: 94.3991% ( 77) 00:07:25.312 8570.092 - 8620.505: 95.0080% ( 106) 00:07:25.312 8620.505 - 8670.917: 95.3642% ( 62) 00:07:25.312 8670.917 - 8721.329: 95.6227% ( 45) 00:07:25.312 8721.329 - 8771.742: 95.9616% ( 59) 00:07:25.312 8771.742 - 8822.154: 96.2891% ( 57) 00:07:25.312 8822.154 - 8872.566: 96.4786% ( 33) 00:07:25.312 8872.566 - 8922.978: 96.6222% ( 25) 00:07:25.312 8922.978 - 8973.391: 96.6969% ( 13) 00:07:25.312 8973.391 - 9023.803: 96.7371% ( 7) 00:07:25.312 9023.803 - 9074.215: 96.7544% ( 3) 00:07:25.312 9074.215 - 9124.628: 96.7888% ( 6) 00:07:25.312 9124.628 - 9175.040: 96.8176% ( 5) 00:07:25.312 9175.040 - 9225.452: 96.8463% ( 5) 00:07:25.312 9225.452 - 9275.865: 96.8807% ( 6) 00:07:25.312 9275.865 - 9326.277: 96.8980% ( 3) 00:07:25.312 9326.277 - 9376.689: 96.9152% ( 3) 00:07:25.312 9376.689 - 9427.102: 96.9382% ( 4) 00:07:25.312 9427.102 - 9477.514: 96.9554% ( 3) 00:07:25.312 9477.514 - 9527.926: 96.9956% ( 7) 00:07:25.312 9527.926 - 9578.338: 97.0588% ( 11) 00:07:25.312 9578.338 - 9628.751: 97.0990% ( 7) 00:07:25.312 9628.751 - 9679.163: 97.1565% ( 10) 00:07:25.312 9679.163 - 9729.575: 97.2943% ( 24) 00:07:25.312 9729.575 - 9779.988: 97.4322% ( 24) 00:07:25.312 9779.988 - 9830.400: 97.4724% ( 7) 00:07:25.312 9830.400 - 9880.812: 97.5126% ( 7) 00:07:25.312 9880.812 - 9931.225: 97.5586% ( 8) 00:07:25.312 9931.225 - 9981.637: 97.5931% ( 6) 00:07:25.312 9981.637 - 10032.049: 97.6275% ( 6) 00:07:25.312 10032.049 - 10082.462: 97.6620% ( 6) 00:07:25.312 10082.462 - 10132.874: 97.7367% ( 13) 00:07:25.312 10132.874 - 10183.286: 97.8458% ( 19) 00:07:25.312 10183.286 - 10233.698: 98.0584% ( 37) 00:07:25.312 10233.698 - 10284.111: 98.1847% ( 22) 00:07:25.312 10284.111 - 10334.523: 98.2652% ( 14) 00:07:25.312 10334.523 - 10384.935: 98.3571% ( 16) 00:07:25.312 10384.935 - 10435.348: 98.3973% ( 7) 00:07:25.312 10435.348 - 10485.760: 98.4260% ( 5) 00:07:25.312 10485.760 - 10536.172: 98.4490% ( 4) 00:07:25.312 10536.172 - 10586.585: 98.4547% ( 1) 00:07:25.312 10586.585 - 10636.997: 98.4662% ( 2) 00:07:25.312 10636.997 - 10687.409: 98.4777% ( 2) 00:07:25.312 10687.409 - 10737.822: 98.4835% ( 1) 00:07:25.312 10737.822 - 10788.234: 98.4949% ( 2) 00:07:25.312 10788.234 - 10838.646: 98.5064% ( 2) 00:07:25.312 10838.646 - 10889.058: 98.5179% ( 2) 00:07:25.312 10889.058 - 10939.471: 98.5237% ( 1) 00:07:25.312 10939.471 - 10989.883: 98.5294% ( 1) 00:07:25.312 11241.945 - 11292.357: 98.5409% ( 2) 00:07:25.312 11292.357 - 11342.769: 98.5524% ( 2) 00:07:25.312 11342.769 - 11393.182: 98.5581% ( 1) 00:07:25.312 11393.182 - 11443.594: 98.5754% ( 3) 00:07:25.312 11443.594 - 11494.006: 98.5811% ( 1) 00:07:25.312 11494.006 - 11544.418: 98.5926% ( 2) 00:07:25.312 
11544.418 - 11594.831: 98.6098% ( 3) 00:07:25.312 11594.831 - 11645.243: 98.6271% ( 3) 00:07:25.312 11645.243 - 11695.655: 98.6328% ( 1) 00:07:25.312 11695.655 - 11746.068: 98.6443% ( 2) 00:07:25.312 11746.068 - 11796.480: 98.6500% ( 1) 00:07:25.312 11796.480 - 11846.892: 98.6673% ( 3) 00:07:25.312 11846.892 - 11897.305: 98.6730% ( 1) 00:07:25.312 11897.305 - 11947.717: 98.7017% ( 5) 00:07:25.312 11947.717 - 11998.129: 98.7362% ( 6) 00:07:25.312 11998.129 - 12048.542: 98.7707% ( 6) 00:07:25.312 12048.542 - 12098.954: 98.7994% ( 5) 00:07:25.312 12098.954 - 12149.366: 98.8281% ( 5) 00:07:25.312 12149.366 - 12199.778: 98.8454% ( 3) 00:07:25.312 12199.778 - 12250.191: 98.8683% ( 4) 00:07:25.312 12250.191 - 12300.603: 98.8913% ( 4) 00:07:25.312 12300.603 - 12351.015: 98.8971% ( 1) 00:07:25.312 13006.375 - 13107.200: 98.9200% ( 4) 00:07:25.312 13107.200 - 13208.025: 98.9602% ( 7) 00:07:25.312 13208.025 - 13308.849: 99.0579% ( 17) 00:07:25.312 13308.849 - 13409.674: 99.1096% ( 9) 00:07:25.312 13409.674 - 13510.498: 99.1326% ( 4) 00:07:25.312 13510.498 - 13611.323: 99.1498% ( 3) 00:07:25.312 13611.323 - 13712.148: 99.1670% ( 3) 00:07:25.312 13712.148 - 13812.972: 99.1785% ( 2) 00:07:25.312 13812.972 - 13913.797: 99.1958% ( 3) 00:07:25.312 13913.797 - 14014.622: 99.2130% ( 3) 00:07:25.312 14014.622 - 14115.446: 99.2302% ( 3) 00:07:25.312 14115.446 - 14216.271: 99.2475% ( 3) 00:07:25.312 14216.271 - 14317.095: 99.2647% ( 3) 00:07:25.312 21778.117 - 21878.942: 99.2819% ( 3) 00:07:25.312 21878.942 - 21979.766: 99.3049% ( 4) 00:07:25.312 21979.766 - 22080.591: 99.3279% ( 4) 00:07:25.312 22080.591 - 22181.415: 99.3509% ( 4) 00:07:25.312 22181.415 - 22282.240: 99.3739% ( 4) 00:07:25.312 22282.240 - 22383.065: 99.4026% ( 5) 00:07:25.312 22383.065 - 22483.889: 99.4256% ( 4) 00:07:25.312 22483.889 - 22584.714: 99.4485% ( 4) 00:07:25.312 22584.714 - 22685.538: 99.4715% ( 4) 00:07:25.312 22685.538 - 22786.363: 99.4887% ( 3) 00:07:25.312 22786.363 - 22887.188: 99.5117% ( 4) 00:07:25.312 22887.188 - 22988.012: 99.5347% ( 4) 00:07:25.312 22988.012 - 23088.837: 99.5577% ( 4) 00:07:25.312 23088.837 - 23189.662: 99.5807% ( 4) 00:07:25.312 23189.662 - 23290.486: 99.6036% ( 4) 00:07:25.312 23290.486 - 23391.311: 99.6266% ( 4) 00:07:25.312 23391.311 - 23492.135: 99.6324% ( 1) 00:07:25.312 26416.049 - 26617.698: 99.6496% ( 3) 00:07:25.312 26617.698 - 26819.348: 99.6955% ( 8) 00:07:25.312 26819.348 - 27020.997: 99.7415% ( 8) 00:07:25.312 27020.997 - 27222.646: 99.7875% ( 8) 00:07:25.312 27222.646 - 27424.295: 99.8334% ( 8) 00:07:25.312 27424.295 - 27625.945: 99.8794% ( 8) 00:07:25.312 27625.945 - 27827.594: 99.9253% ( 8) 00:07:25.312 27827.594 - 28029.243: 99.9713% ( 8) 00:07:25.312 28029.243 - 28230.892: 100.0000% ( 5) 00:07:25.312 00:07:25.312 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:25.312 ============================================================================== 00:07:25.312 Range in us Cumulative IO count 00:07:25.312 5847.828 - 5873.034: 0.0057% ( 1) 00:07:25.312 5873.034 - 5898.240: 0.0230% ( 3) 00:07:25.312 5898.240 - 5923.446: 0.0287% ( 1) 00:07:25.312 5923.446 - 5948.652: 0.0460% ( 3) 00:07:25.312 5948.652 - 5973.858: 0.0517% ( 1) 00:07:25.312 5973.858 - 5999.065: 0.0862% ( 6) 00:07:25.312 5999.065 - 6024.271: 0.1149% ( 5) 00:07:25.312 6024.271 - 6049.477: 0.1608% ( 8) 00:07:25.312 6049.477 - 6074.683: 0.2413% ( 14) 00:07:25.312 6074.683 - 6099.889: 0.3562% ( 20) 00:07:25.312 6099.889 - 6125.095: 0.4710% ( 20) 00:07:25.312 6125.095 - 6150.302: 0.5917% ( 21) 00:07:25.312 6150.302 
- 6175.508: 0.7468% ( 27) 00:07:25.312 6175.508 - 6200.714: 0.9708% ( 39) 00:07:25.312 6200.714 - 6225.920: 1.2408% ( 47) 00:07:25.312 6225.920 - 6251.126: 1.7233% ( 84) 00:07:25.312 6251.126 - 6276.332: 2.0623% ( 59) 00:07:25.312 6276.332 - 6301.538: 2.4127% ( 61) 00:07:25.312 6301.538 - 6326.745: 2.8665% ( 79) 00:07:25.312 6326.745 - 6351.951: 3.4467% ( 101) 00:07:25.312 6351.951 - 6377.157: 4.1073% ( 115) 00:07:25.312 6377.157 - 6402.363: 4.8254% ( 125) 00:07:25.312 6402.363 - 6427.569: 5.4745% ( 113) 00:07:25.312 6427.569 - 6452.775: 6.1926% ( 125) 00:07:25.312 6452.775 - 6503.188: 8.2031% ( 350) 00:07:25.312 6503.188 - 6553.600: 10.9662% ( 481) 00:07:25.312 6553.600 - 6604.012: 13.7580% ( 486) 00:07:25.312 6604.012 - 6654.425: 17.8539% ( 713) 00:07:25.312 6654.425 - 6704.837: 23.4203% ( 969) 00:07:25.312 6704.837 - 6755.249: 29.4060% ( 1042) 00:07:25.312 6755.249 - 6805.662: 36.0811% ( 1162) 00:07:25.312 6805.662 - 6856.074: 42.5896% ( 1133) 00:07:25.312 6856.074 - 6906.486: 49.3107% ( 1170) 00:07:25.312 6906.486 - 6956.898: 54.4635% ( 897) 00:07:25.312 6956.898 - 7007.311: 58.6972% ( 737) 00:07:25.312 7007.311 - 7057.723: 63.1319% ( 772) 00:07:25.312 7057.723 - 7108.135: 65.7112% ( 449) 00:07:25.312 7108.135 - 7158.548: 68.2445% ( 441) 00:07:25.312 7158.548 - 7208.960: 69.8932% ( 287) 00:07:25.312 7208.960 - 7259.372: 71.5935% ( 296) 00:07:25.312 7259.372 - 7309.785: 72.8056% ( 211) 00:07:25.312 7309.785 - 7360.197: 73.5754% ( 134) 00:07:25.312 7360.197 - 7410.609: 74.3107% ( 128) 00:07:25.312 7410.609 - 7461.022: 74.7702% ( 80) 00:07:25.312 7461.022 - 7511.434: 75.1551% ( 67) 00:07:25.312 7511.434 - 7561.846: 75.5630% ( 71) 00:07:25.312 7561.846 - 7612.258: 76.0857% ( 91) 00:07:25.312 7612.258 - 7662.671: 76.3557% ( 47) 00:07:25.312 7662.671 - 7713.083: 76.7233% ( 64) 00:07:25.312 7713.083 - 7763.495: 77.0221% ( 52) 00:07:25.312 7763.495 - 7813.908: 77.4529% ( 75) 00:07:25.312 7813.908 - 7864.320: 78.0331% ( 101) 00:07:25.312 7864.320 - 7914.732: 78.8718% ( 146) 00:07:25.312 7914.732 - 7965.145: 80.0494% ( 205) 00:07:25.312 7965.145 - 8015.557: 81.3936% ( 234) 00:07:25.312 8015.557 - 8065.969: 82.5885% ( 208) 00:07:25.312 8065.969 - 8116.382: 83.6282% ( 181) 00:07:25.312 8116.382 - 8166.794: 84.6450% ( 177) 00:07:25.313 8166.794 - 8217.206: 86.8107% ( 377) 00:07:25.313 8217.206 - 8267.618: 88.5168% ( 297) 00:07:25.313 8267.618 - 8318.031: 90.0678% ( 270) 00:07:25.313 8318.031 - 8368.443: 91.1650% ( 191) 00:07:25.313 8368.443 - 8418.855: 92.0496% ( 154) 00:07:25.313 8418.855 - 8469.268: 92.6643% ( 107) 00:07:25.313 8469.268 - 8519.680: 93.4743% ( 141) 00:07:25.313 8519.680 - 8570.092: 94.1062% ( 110) 00:07:25.313 8570.092 - 8620.505: 94.4508% ( 60) 00:07:25.313 8620.505 - 8670.917: 94.7840% ( 58) 00:07:25.313 8670.917 - 8721.329: 95.1172% ( 58) 00:07:25.313 8721.329 - 8771.742: 95.5365% ( 73) 00:07:25.313 8771.742 - 8822.154: 95.7261% ( 33) 00:07:25.313 8822.154 - 8872.566: 95.8869% ( 28) 00:07:25.313 8872.566 - 8922.978: 96.0708% ( 32) 00:07:25.313 8922.978 - 8973.391: 96.1914% ( 21) 00:07:25.313 8973.391 - 9023.803: 96.4154% ( 39) 00:07:25.313 9023.803 - 9074.215: 96.4959% ( 14) 00:07:25.313 9074.215 - 9124.628: 96.5476% ( 9) 00:07:25.313 9124.628 - 9175.040: 96.6050% ( 10) 00:07:25.313 9175.040 - 9225.452: 96.6739% ( 12) 00:07:25.313 9225.452 - 9275.865: 96.7486% ( 13) 00:07:25.313 9275.865 - 9326.277: 96.8176% ( 12) 00:07:25.313 9326.277 - 9376.689: 96.9784% ( 28) 00:07:25.313 9376.689 - 9427.102: 97.1622% ( 32) 00:07:25.313 9427.102 - 9477.514: 97.3058% ( 25) 00:07:25.313 
9477.514 - 9527.926: 97.4954% ( 33) 00:07:25.313 9527.926 - 9578.338: 97.6275% ( 23) 00:07:25.313 9578.338 - 9628.751: 97.7137% ( 15) 00:07:25.313 9628.751 - 9679.163: 97.7826% ( 12) 00:07:25.313 9679.163 - 9729.575: 97.8401% ( 10) 00:07:25.313 9729.575 - 9779.988: 97.8918% ( 9) 00:07:25.313 9779.988 - 9830.400: 97.9435% ( 9) 00:07:25.313 9830.400 - 9880.812: 97.9722% ( 5) 00:07:25.313 9880.812 - 9931.225: 97.9952% ( 4) 00:07:25.313 9931.225 - 9981.637: 98.0182% ( 4) 00:07:25.313 9981.637 - 10032.049: 98.0354% ( 3) 00:07:25.313 10032.049 - 10082.462: 98.0584% ( 4) 00:07:25.313 10082.462 - 10132.874: 98.0871% ( 5) 00:07:25.313 10132.874 - 10183.286: 98.1330% ( 8) 00:07:25.313 10183.286 - 10233.698: 98.1847% ( 9) 00:07:25.313 10233.698 - 10284.111: 98.2307% ( 8) 00:07:25.313 10284.111 - 10334.523: 98.2709% ( 7) 00:07:25.313 10334.523 - 10384.935: 98.3571% ( 15) 00:07:25.313 10384.935 - 10435.348: 98.4432% ( 15) 00:07:25.313 10435.348 - 10485.760: 98.4662% ( 4) 00:07:25.313 10485.760 - 10536.172: 98.4892% ( 4) 00:07:25.313 10536.172 - 10586.585: 98.5007% ( 2) 00:07:25.313 10586.585 - 10636.997: 98.5179% ( 3) 00:07:25.313 10636.997 - 10687.409: 98.5294% ( 2) 00:07:25.313 11998.129 - 12048.542: 98.5409% ( 2) 00:07:25.313 12048.542 - 12098.954: 98.5524% ( 2) 00:07:25.313 12149.366 - 12199.778: 98.5696% ( 3) 00:07:25.313 12199.778 - 12250.191: 98.5869% ( 3) 00:07:25.313 12250.191 - 12300.603: 98.6041% ( 3) 00:07:25.313 12300.603 - 12351.015: 98.6156% ( 2) 00:07:25.313 12351.015 - 12401.428: 98.6443% ( 5) 00:07:25.313 12401.428 - 12451.840: 98.6788% ( 6) 00:07:25.313 12451.840 - 12502.252: 98.7075% ( 5) 00:07:25.313 12502.252 - 12552.665: 98.7764% ( 12) 00:07:25.313 12552.665 - 12603.077: 98.8281% ( 9) 00:07:25.313 12603.077 - 12653.489: 98.8856% ( 10) 00:07:25.313 12653.489 - 12703.902: 98.9143% ( 5) 00:07:25.313 12703.902 - 12754.314: 98.9315% ( 3) 00:07:25.313 12754.314 - 12804.726: 98.9602% ( 5) 00:07:25.313 12804.726 - 12855.138: 98.9832% ( 4) 00:07:25.313 12855.138 - 12905.551: 99.0062% ( 4) 00:07:25.313 12905.551 - 13006.375: 99.0924% ( 15) 00:07:25.313 13006.375 - 13107.200: 99.1670% ( 13) 00:07:25.313 13107.200 - 13208.025: 99.2188% ( 9) 00:07:25.313 13208.025 - 13308.849: 99.2417% ( 4) 00:07:25.313 13308.849 - 13409.674: 99.2590% ( 3) 00:07:25.313 13409.674 - 13510.498: 99.2647% ( 1) 00:07:25.313 19963.274 - 20064.098: 99.2762% ( 2) 00:07:25.313 20064.098 - 20164.923: 99.2992% ( 4) 00:07:25.313 20164.923 - 20265.748: 99.3222% ( 4) 00:07:25.313 20265.748 - 20366.572: 99.3451% ( 4) 00:07:25.313 20366.572 - 20467.397: 99.3681% ( 4) 00:07:25.313 20467.397 - 20568.222: 99.3968% ( 5) 00:07:25.313 20568.222 - 20669.046: 99.4198% ( 4) 00:07:25.313 20669.046 - 20769.871: 99.4428% ( 4) 00:07:25.313 20769.871 - 20870.695: 99.4658% ( 4) 00:07:25.313 20870.695 - 20971.520: 99.4887% ( 4) 00:07:25.313 20971.520 - 21072.345: 99.5117% ( 4) 00:07:25.313 21072.345 - 21173.169: 99.5347% ( 4) 00:07:25.313 21173.169 - 21273.994: 99.5577% ( 4) 00:07:25.313 21273.994 - 21374.818: 99.5807% ( 4) 00:07:25.313 21374.818 - 21475.643: 99.5979% ( 3) 00:07:25.313 21475.643 - 21576.468: 99.6266% ( 5) 00:07:25.313 21576.468 - 21677.292: 99.6324% ( 1) 00:07:25.313 24702.031 - 24802.855: 99.6381% ( 1) 00:07:25.313 24802.855 - 24903.680: 99.6553% ( 3) 00:07:25.313 24903.680 - 25004.505: 99.6783% ( 4) 00:07:25.313 25004.505 - 25105.329: 99.7013% ( 4) 00:07:25.313 25105.329 - 25206.154: 99.7243% ( 4) 00:07:25.313 25206.154 - 25306.978: 99.7472% ( 4) 00:07:25.313 25306.978 - 25407.803: 99.7702% ( 4) 00:07:25.313 25407.803 - 
25508.628: 99.7932% ( 4) 00:07:25.313 25508.628 - 25609.452: 99.8162% ( 4) 00:07:25.313 25609.452 - 25710.277: 99.8449% ( 5) 00:07:25.313 25710.277 - 25811.102: 99.8679% ( 4) 00:07:25.313 25811.102 - 26012.751: 99.9138% ( 8) 00:07:25.313 26012.751 - 26214.400: 99.9598% ( 8) 00:07:25.313 26214.400 - 26416.049: 100.0000% ( 7) 00:07:25.313 00:07:25.313 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:25.313 ============================================================================== 00:07:25.313 Range in us Cumulative IO count 00:07:25.313 5873.034 - 5898.240: 0.0057% ( 1) 00:07:25.313 5948.652 - 5973.858: 0.0172% ( 2) 00:07:25.313 5973.858 - 5999.065: 0.0343% ( 3) 00:07:25.313 5999.065 - 6024.271: 0.0744% ( 7) 00:07:25.313 6024.271 - 6049.477: 0.1603% ( 15) 00:07:25.313 6049.477 - 6074.683: 0.2232% ( 11) 00:07:25.313 6074.683 - 6099.889: 0.3205% ( 17) 00:07:25.313 6099.889 - 6125.095: 0.4522% ( 23) 00:07:25.313 6125.095 - 6150.302: 0.6181% ( 29) 00:07:25.313 6150.302 - 6175.508: 0.9158% ( 52) 00:07:25.313 6175.508 - 6200.714: 1.1905% ( 48) 00:07:25.313 6200.714 - 6225.920: 1.6598% ( 82) 00:07:25.313 6225.920 - 6251.126: 1.8830% ( 39) 00:07:25.313 6251.126 - 6276.332: 2.0490% ( 29) 00:07:25.313 6276.332 - 6301.538: 2.3294% ( 49) 00:07:25.313 6301.538 - 6326.745: 2.8045% ( 83) 00:07:25.313 6326.745 - 6351.951: 3.1364% ( 58) 00:07:25.313 6351.951 - 6377.157: 3.6516% ( 90) 00:07:25.313 6377.157 - 6402.363: 4.1953% ( 95) 00:07:25.313 6402.363 - 6427.569: 4.8134% ( 108) 00:07:25.313 6427.569 - 6452.775: 5.6777% ( 151) 00:07:25.313 6452.775 - 6503.188: 7.7610% ( 364) 00:07:25.313 6503.188 - 6553.600: 10.7944% ( 530) 00:07:25.313 6553.600 - 6604.012: 14.2170% ( 598) 00:07:25.313 6604.012 - 6654.425: 18.2234% ( 700) 00:07:25.313 6654.425 - 6704.837: 23.8610% ( 985) 00:07:25.313 6704.837 - 6755.249: 29.5330% ( 991) 00:07:25.313 6755.249 - 6805.662: 36.2523% ( 1174) 00:07:25.313 6805.662 - 6856.074: 42.8285% ( 1149) 00:07:25.313 6856.074 - 6906.486: 48.0369% ( 910) 00:07:25.313 6906.486 - 6956.898: 53.7145% ( 992) 00:07:25.313 6956.898 - 7007.311: 58.7798% ( 885) 00:07:25.313 7007.311 - 7057.723: 62.7976% ( 702) 00:07:25.313 7057.723 - 7108.135: 65.8883% ( 540) 00:07:25.313 7108.135 - 7158.548: 68.2005% ( 404) 00:07:25.313 7158.548 - 7208.960: 70.3011% ( 367) 00:07:25.313 7208.960 - 7259.372: 71.2740% ( 170) 00:07:25.313 7259.372 - 7309.785: 72.0238% ( 131) 00:07:25.313 7309.785 - 7360.197: 72.9052% ( 154) 00:07:25.313 7360.197 - 7410.609: 73.6607% ( 132) 00:07:25.313 7410.609 - 7461.022: 74.1930% ( 93) 00:07:25.313 7461.022 - 7511.434: 74.5707% ( 66) 00:07:25.313 7511.434 - 7561.846: 75.0916% ( 91) 00:07:25.313 7561.846 - 7612.258: 75.4350% ( 60) 00:07:25.313 7612.258 - 7662.671: 75.7154% ( 49) 00:07:25.313 7662.671 - 7713.083: 76.1962% ( 84) 00:07:25.313 7713.083 - 7763.495: 76.7972% ( 105) 00:07:25.313 7763.495 - 7813.908: 77.4096% ( 107) 00:07:25.313 7813.908 - 7864.320: 78.0048% ( 104) 00:07:25.313 7864.320 - 7914.732: 78.9148% ( 159) 00:07:25.313 7914.732 - 7965.145: 79.5158% ( 105) 00:07:25.313 7965.145 - 8015.557: 80.7921% ( 223) 00:07:25.313 8015.557 - 8065.969: 82.0341% ( 217) 00:07:25.314 8065.969 - 8116.382: 83.2875% ( 219) 00:07:25.314 8116.382 - 8166.794: 84.3006% ( 177) 00:07:25.314 8166.794 - 8217.206: 86.0062% ( 298) 00:07:25.314 8217.206 - 8267.618: 87.9178% ( 334) 00:07:25.314 8267.618 - 8318.031: 89.1140% ( 209) 00:07:25.314 8318.031 - 8368.443: 90.2473% ( 198) 00:07:25.314 8368.443 - 8418.855: 91.4034% ( 202) 00:07:25.314 8418.855 - 8469.268: 92.3764% ( 170) 
00:07:25.314 8469.268 - 8519.680: 93.3837% ( 176) 00:07:25.314 8519.680 - 8570.092: 94.2193% ( 146) 00:07:25.314 8570.092 - 8620.505: 94.5627% ( 60) 00:07:25.314 8620.505 - 8670.917: 94.8375% ( 48) 00:07:25.314 8670.917 - 8721.329: 95.1122% ( 48) 00:07:25.314 8721.329 - 8771.742: 95.3984% ( 50) 00:07:25.314 8771.742 - 8822.154: 95.7818% ( 67) 00:07:25.314 8822.154 - 8872.566: 96.2111% ( 75) 00:07:25.314 8872.566 - 8922.978: 96.4515% ( 42) 00:07:25.314 8922.978 - 8973.391: 96.6403% ( 33) 00:07:25.314 8973.391 - 9023.803: 96.8120% ( 30) 00:07:25.314 9023.803 - 9074.215: 96.9895% ( 31) 00:07:25.314 9074.215 - 9124.628: 97.1326% ( 25) 00:07:25.314 9124.628 - 9175.040: 97.2871% ( 27) 00:07:25.314 9175.040 - 9225.452: 97.3558% ( 12) 00:07:25.314 9225.452 - 9275.865: 97.4130% ( 10) 00:07:25.314 9275.865 - 9326.277: 97.4760% ( 11) 00:07:25.314 9326.277 - 9376.689: 97.5275% ( 9) 00:07:25.314 9376.689 - 9427.102: 97.6019% ( 13) 00:07:25.314 9427.102 - 9477.514: 97.6706% ( 12) 00:07:25.314 9477.514 - 9527.926: 97.7507% ( 14) 00:07:25.314 9527.926 - 9578.338: 97.7965% ( 8) 00:07:25.314 9578.338 - 9628.751: 97.8480% ( 9) 00:07:25.314 9628.751 - 9679.163: 97.8594% ( 2) 00:07:25.314 9679.163 - 9729.575: 97.8709% ( 2) 00:07:25.314 9729.575 - 9779.988: 97.8766% ( 1) 00:07:25.314 9779.988 - 9830.400: 97.8938% ( 3) 00:07:25.314 9830.400 - 9880.812: 97.8995% ( 1) 00:07:25.314 9880.812 - 9931.225: 97.9167% ( 3) 00:07:25.314 9931.225 - 9981.637: 97.9224% ( 1) 00:07:25.314 9981.637 - 10032.049: 97.9338% ( 2) 00:07:25.314 10032.049 - 10082.462: 97.9567% ( 4) 00:07:25.314 10082.462 - 10132.874: 98.0025% ( 8) 00:07:25.314 10132.874 - 10183.286: 98.0655% ( 11) 00:07:25.314 10183.286 - 10233.698: 98.1342% ( 12) 00:07:25.314 10233.698 - 10284.111: 98.1914% ( 10) 00:07:25.314 10284.111 - 10334.523: 98.4032% ( 37) 00:07:25.314 10334.523 - 10384.935: 98.4489% ( 8) 00:07:25.314 10384.935 - 10435.348: 98.4890% ( 7) 00:07:25.314 10435.348 - 10485.760: 98.5062% ( 3) 00:07:25.314 10485.760 - 10536.172: 98.5234% ( 3) 00:07:25.314 10536.172 - 10586.585: 98.5348% ( 2) 00:07:25.314 11141.120 - 11191.532: 98.5405% ( 1) 00:07:25.314 11544.418 - 11594.831: 98.5462% ( 1) 00:07:25.314 11594.831 - 11645.243: 98.5691% ( 4) 00:07:25.314 11645.243 - 11695.655: 98.5920% ( 4) 00:07:25.314 11695.655 - 11746.068: 98.6149% ( 4) 00:07:25.314 11746.068 - 11796.480: 98.6264% ( 2) 00:07:25.314 11796.480 - 11846.892: 98.6550% ( 5) 00:07:25.314 11846.892 - 11897.305: 98.7065% ( 9) 00:07:25.314 11897.305 - 11947.717: 98.7466% ( 7) 00:07:25.314 11947.717 - 11998.129: 98.7695% ( 4) 00:07:25.314 11998.129 - 12048.542: 98.7809% ( 2) 00:07:25.314 12048.542 - 12098.954: 98.7924% ( 2) 00:07:25.314 12098.954 - 12149.366: 98.7981% ( 1) 00:07:25.314 12149.366 - 12199.778: 98.8038% ( 1) 00:07:25.314 12199.778 - 12250.191: 98.8152% ( 2) 00:07:25.314 12250.191 - 12300.603: 98.8267% ( 2) 00:07:25.314 12300.603 - 12351.015: 98.8381% ( 2) 00:07:25.314 12351.015 - 12401.428: 98.8439% ( 1) 00:07:25.314 12401.428 - 12451.840: 98.8553% ( 2) 00:07:25.314 12451.840 - 12502.252: 98.8668% ( 2) 00:07:25.314 12502.252 - 12552.665: 98.8725% ( 1) 00:07:25.314 12552.665 - 12603.077: 98.8839% ( 2) 00:07:25.314 12603.077 - 12653.489: 98.8954% ( 2) 00:07:25.314 12653.489 - 12703.902: 98.9011% ( 1) 00:07:25.314 12855.138 - 12905.551: 98.9068% ( 1) 00:07:25.314 12905.551 - 13006.375: 98.9297% ( 4) 00:07:25.314 13006.375 - 13107.200: 98.9526% ( 4) 00:07:25.314 13107.200 - 13208.025: 98.9641% ( 2) 00:07:25.314 13208.025 - 13308.849: 98.9927% ( 5) 00:07:25.314 13308.849 - 13409.674: 
99.0156% ( 4) 00:07:25.314 13409.674 - 13510.498: 99.0442% ( 5) 00:07:25.314 13510.498 - 13611.323: 99.0614% ( 3) 00:07:25.314 13611.323 - 13712.148: 99.1186% ( 10) 00:07:25.314 13712.148 - 13812.972: 99.1815% ( 11) 00:07:25.314 13812.972 - 13913.797: 99.2445% ( 11) 00:07:25.314 13913.797 - 14014.622: 99.3132% ( 12) 00:07:25.314 14014.622 - 14115.446: 99.3361% ( 4) 00:07:25.314 14115.446 - 14216.271: 99.3590% ( 4) 00:07:25.314 14216.271 - 14317.095: 99.3819% ( 4) 00:07:25.314 14317.095 - 14417.920: 99.4048% ( 4) 00:07:25.314 14417.920 - 14518.745: 99.4334% ( 5) 00:07:25.314 14518.745 - 14619.569: 99.4563% ( 4) 00:07:25.314 14619.569 - 14720.394: 99.4792% ( 4) 00:07:25.314 14720.394 - 14821.218: 99.5021% ( 4) 00:07:25.314 14821.218 - 14922.043: 99.5250% ( 4) 00:07:25.314 14922.043 - 15022.868: 99.5478% ( 4) 00:07:25.314 15022.868 - 15123.692: 99.5707% ( 4) 00:07:25.314 15123.692 - 15224.517: 99.5994% ( 5) 00:07:25.314 15224.517 - 15325.342: 99.6223% ( 4) 00:07:25.314 15325.342 - 15426.166: 99.6337% ( 2) 00:07:25.314 19459.151 - 19559.975: 99.6451% ( 2) 00:07:25.314 19559.975 - 19660.800: 99.6680% ( 4) 00:07:25.314 19660.800 - 19761.625: 99.6909% ( 4) 00:07:25.314 19761.625 - 19862.449: 99.7138% ( 4) 00:07:25.314 19862.449 - 19963.274: 99.7367% ( 4) 00:07:25.314 19963.274 - 20064.098: 99.7596% ( 4) 00:07:25.314 20064.098 - 20164.923: 99.7825% ( 4) 00:07:25.314 20164.923 - 20265.748: 99.8054% ( 4) 00:07:25.314 20265.748 - 20366.572: 99.8283% ( 4) 00:07:25.314 20366.572 - 20467.397: 99.8569% ( 5) 00:07:25.314 20467.397 - 20568.222: 99.8741% ( 3) 00:07:25.314 20568.222 - 20669.046: 99.9027% ( 5) 00:07:25.314 20669.046 - 20769.871: 99.9256% ( 4) 00:07:25.314 20769.871 - 20870.695: 99.9485% ( 4) 00:07:25.314 20870.695 - 20971.520: 99.9714% ( 4) 00:07:25.314 20971.520 - 21072.345: 99.9943% ( 4) 00:07:25.314 21072.345 - 21173.169: 100.0000% ( 1) 00:07:25.314 00:07:25.314 ************************************ 00:07:25.314 END TEST nvme_perf 00:07:25.314 ************************************ 00:07:25.314 04:27:48 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:25.314 00:07:25.314 real 0m2.489s 00:07:25.314 user 0m2.201s 00:07:25.314 sys 0m0.189s 00:07:25.314 04:27:48 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.314 04:27:48 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.314 04:27:48 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.314 ************************************ 00:07:25.314 START TEST nvme_hello_world 00:07:25.314 ************************************ 00:07:25.314 04:27:48 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:25.314 Initializing NVMe Controllers 00:07:25.314 Attached to 0000:00:10.0 00:07:25.314 Namespace ID: 1 size: 6GB 00:07:25.314 Attached to 0000:00:11.0 00:07:25.314 Namespace ID: 1 size: 5GB 00:07:25.314 Attached to 0000:00:13.0 00:07:25.314 Namespace ID: 1 size: 1GB 00:07:25.314 Attached to 0000:00:12.0 00:07:25.314 Namespace ID: 1 size: 4GB 00:07:25.314 Namespace ID: 2 size: 4GB 00:07:25.314 Namespace ID: 3 size: 4GB 00:07:25.314 Initialization complete. 
00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 INFO: using host memory buffer for IO 00:07:25.314 Hello world! 00:07:25.314 ************************************ 00:07:25.314 END TEST nvme_hello_world 00:07:25.314 ************************************ 00:07:25.314 00:07:25.314 real 0m0.193s 00:07:25.314 user 0m0.070s 00:07:25.314 sys 0m0.081s 00:07:25.314 04:27:48 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.314 04:27:48 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:25.314 04:27:48 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.314 04:27:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.314 ************************************ 00:07:25.314 START TEST nvme_sgl 00:07:25.314 ************************************ 00:07:25.314 04:27:48 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:25.595 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:25.595 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:25.595 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:25.595 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:25.595 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:25.595 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:25.595 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:25.595 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_3 Invalid IO length 
parameter 00:07:25.595 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:25.595 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:25.595 NVMe Readv/Writev Request test 00:07:25.595 Attached to 0000:00:10.0 00:07:25.595 Attached to 0000:00:11.0 00:07:25.595 Attached to 0000:00:13.0 00:07:25.595 Attached to 0000:00:12.0 00:07:25.595 0000:00:10.0: build_io_request_2 test passed 00:07:25.595 0000:00:10.0: build_io_request_4 test passed 00:07:25.595 0000:00:10.0: build_io_request_5 test passed 00:07:25.595 0000:00:10.0: build_io_request_6 test passed 00:07:25.595 0000:00:10.0: build_io_request_7 test passed 00:07:25.595 0000:00:10.0: build_io_request_10 test passed 00:07:25.595 0000:00:11.0: build_io_request_2 test passed 00:07:25.595 0000:00:11.0: build_io_request_4 test passed 00:07:25.595 0000:00:11.0: build_io_request_5 test passed 00:07:25.595 0000:00:11.0: build_io_request_6 test passed 00:07:25.595 0000:00:11.0: build_io_request_7 test passed 00:07:25.595 0000:00:11.0: build_io_request_10 test passed 00:07:25.595 Cleaning up... 00:07:25.595 00:07:25.595 real 0m0.263s 00:07:25.595 user 0m0.119s 00:07:25.595 sys 0m0.098s 00:07:25.595 04:27:48 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.595 04:27:48 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:25.595 ************************************ 00:07:25.595 END TEST nvme_sgl 00:07:25.595 ************************************ 00:07:25.595 04:27:48 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:25.595 04:27:48 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.595 04:27:48 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.595 04:27:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.595 ************************************ 00:07:25.595 START TEST nvme_e2edp 00:07:25.595 ************************************ 00:07:25.595 04:27:48 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:25.853 NVMe Write/Read with End-to-End data protection test 00:07:25.853 Attached to 0000:00:10.0 00:07:25.853 Attached to 0000:00:11.0 00:07:25.853 Attached to 0000:00:13.0 00:07:25.853 Attached to 0000:00:12.0 00:07:25.853 Cleaning up... 
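The nvme_sgl run above builds vectored read/write requests against each controller and expects requests whose scatter-gather segments do not add up to a whole number of blocks to be flagged (the "Invalid IO length parameter" lines), while the well-formed requests pass. Below is a rough sketch of how such a vectored request is submitted through SPDK's public API, assuming a two-segment buffer; it is an illustration only, not the actual test/nvme/sgl/sgl.c source, and the context struct and callback names are invented for this example.

    #include "spdk/nvme.h"

    /* Hypothetical two-segment scatter-gather context, used only in this sketch. */
    struct sgl_ctx {
        void     *seg_base[2];
        uint32_t  seg_len[2];
        int       cur;
    };

    static void
    reset_sgl(void *cb_arg, uint32_t offset)
    {
        struct sgl_ctx *ctx = cb_arg;

        /* Rewind the SGL walk; offset 0 is assumed for this simple case. */
        ctx->cur = 0;
    }

    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = cb_arg;

        *address = ctx->seg_base[ctx->cur];
        *length = ctx->seg_len[ctx->cur];
        ctx->cur++;
        return 0;
    }

    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* The completion status says whether the vectored read succeeded. */
    }

    /* Submit a two-segment read of lba_count blocks starting at LBA 0. A request
     * whose segment lengths do not add up to lba_count * sector_size is the kind
     * the test expects to be rejected. */
    static int
    submit_vectored_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                         struct sgl_ctx *ctx, uint32_t lba_count)
    {
        return spdk_nvme_ns_cmd_readv(ns, qpair, 0, lba_count,
                                      io_done, ctx, 0,
                                      reset_sgl, next_sge);
    }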
00:07:25.853 00:07:25.853 real 0m0.205s 00:07:25.853 user 0m0.066s 00:07:25.853 sys 0m0.094s 00:07:25.853 04:27:48 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.853 04:27:48 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:25.853 ************************************ 00:07:25.853 END TEST nvme_e2edp 00:07:25.853 ************************************ 00:07:25.853 04:27:48 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:25.854 04:27:48 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.854 04:27:48 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.854 04:27:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.854 ************************************ 00:07:25.854 START TEST nvme_reserve 00:07:25.854 ************************************ 00:07:25.854 04:27:48 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:26.112 ===================================================== 00:07:26.112 NVMe Controller at PCI bus 0, device 16, function 0 00:07:26.112 ===================================================== 00:07:26.112 Reservations: Not Supported 00:07:26.112 ===================================================== 00:07:26.112 NVMe Controller at PCI bus 0, device 17, function 0 00:07:26.112 ===================================================== 00:07:26.112 Reservations: Not Supported 00:07:26.112 ===================================================== 00:07:26.112 NVMe Controller at PCI bus 0, device 19, function 0 00:07:26.112 ===================================================== 00:07:26.112 Reservations: Not Supported 00:07:26.112 ===================================================== 00:07:26.112 NVMe Controller at PCI bus 0, device 18, function 0 00:07:26.112 ===================================================== 00:07:26.112 Reservations: Not Supported 00:07:26.112 Reservation test passed 00:07:26.112 00:07:26.112 real 0m0.211s 00:07:26.112 user 0m0.069s 00:07:26.112 sys 0m0.100s 00:07:26.112 ************************************ 00:07:26.112 END TEST nvme_reserve 00:07:26.112 ************************************ 00:07:26.112 04:27:49 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.112 04:27:49 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:26.112 04:27:49 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:26.112 04:27:49 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.112 04:27:49 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.112 04:27:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.112 ************************************ 00:07:26.112 START TEST nvme_err_injection 00:07:26.112 ************************************ 00:07:26.112 04:27:49 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:26.369 NVMe Error Injection test 00:07:26.369 Attached to 0000:00:10.0 00:07:26.369 Attached to 0000:00:11.0 00:07:26.369 Attached to 0000:00:13.0 00:07:26.369 Attached to 0000:00:12.0 00:07:26.369 0000:00:11.0: get features failed as expected 00:07:26.369 0000:00:13.0: get features failed as expected 00:07:26.369 0000:00:12.0: get features failed as expected 00:07:26.369 0000:00:10.0: get features failed as expected 00:07:26.369 
0000:00:10.0: get features successfully as expected 00:07:26.369 0000:00:11.0: get features successfully as expected 00:07:26.369 0000:00:13.0: get features successfully as expected 00:07:26.369 0000:00:12.0: get features successfully as expected 00:07:26.369 0000:00:10.0: read failed as expected 00:07:26.369 0000:00:11.0: read failed as expected 00:07:26.369 0000:00:13.0: read failed as expected 00:07:26.369 0000:00:12.0: read failed as expected 00:07:26.369 0000:00:10.0: read successfully as expected 00:07:26.369 0000:00:11.0: read successfully as expected 00:07:26.369 0000:00:13.0: read successfully as expected 00:07:26.369 0000:00:12.0: read successfully as expected 00:07:26.369 Cleaning up... 00:07:26.369 00:07:26.369 real 0m0.206s 00:07:26.369 user 0m0.083s 00:07:26.369 sys 0m0.085s 00:07:26.369 04:27:49 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.369 04:27:49 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:26.369 ************************************ 00:07:26.369 END TEST nvme_err_injection 00:07:26.369 ************************************ 00:07:26.369 04:27:49 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:26.369 04:27:49 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:07:26.369 04:27:49 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.369 04:27:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.369 ************************************ 00:07:26.369 START TEST nvme_overhead 00:07:26.369 ************************************ 00:07:26.369 04:27:49 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:27.750 Initializing NVMe Controllers 00:07:27.750 Attached to 0000:00:10.0 00:07:27.750 Attached to 0000:00:11.0 00:07:27.750 Attached to 0000:00:13.0 00:07:27.750 Attached to 0000:00:12.0 00:07:27.750 Initialization complete. Launching workers. 
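The nvme_overhead run that begins here times, for each I/O, how long the CPU spends in the submission call ("submit (in ns)") and in the completion-processing call that reaps it ("complete (in ns)"); the two histograms that follow bucket those samples. Roughly speaking, the measurement brackets each call with SPDK's TSC helpers. The sketch below shows that bracketing under the assumption that this is how the samples are taken; it is not the actual test/nvme/overhead source, the helper names are invented, and buf is assumed to be DMA-safe memory obtained from spdk_zmalloc().

    #include "spdk/nvme.h"
    #include "spdk/env.h"

    static void
    read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Nothing to do; only the timing around the calls matters here. */
    }

    /* Take one submit-latency and one complete-latency sample in nanoseconds. */
    static void
    sample_overhead(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf, uint64_t *submit_ns, uint64_t *complete_ns)
    {
        uint64_t hz = spdk_get_ticks_hz();
        uint64_t t0, t1;
        int32_t reaped = 0;

        t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* block */,
                              read_done, NULL, 0);
        t1 = spdk_get_ticks();
        *submit_ns = (t1 - t0) * 1000000000ULL / hz;

        /* Only the poll that actually reaps the completion is kept, so the time
         * the device spends servicing the read is not counted as overhead. */
        while (reaped == 0) {
            t0 = spdk_get_ticks();
            reaped = spdk_nvme_qpair_process_completions(qpair, 0);
            t1 = spdk_get_ticks();
        }
        *complete_ns = (t1 - t0) * 1000000000ULL / hz;
    }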
00:07:27.750 submit (in ns) avg, min, max = 11651.0, 9832.3, 1215805.4 00:07:27.750 complete (in ns) avg, min, max = 7750.1, 7223.8, 337423.8 00:07:27.750 00:07:27.750 Submit histogram 00:07:27.750 ================ 00:07:27.750 Range in us Cumulative Count 00:07:27.750 9.797 - 9.846: 0.0057% ( 1) 00:07:27.750 10.240 - 10.289: 0.0114% ( 1) 00:07:27.750 10.289 - 10.338: 0.0172% ( 1) 00:07:27.750 10.634 - 10.683: 0.0229% ( 1) 00:07:27.750 10.732 - 10.782: 0.0286% ( 1) 00:07:27.750 10.782 - 10.831: 0.0343% ( 1) 00:07:27.750 10.831 - 10.880: 0.1259% ( 16) 00:07:27.750 10.880 - 10.929: 0.6582% ( 93) 00:07:27.750 10.929 - 10.978: 2.9876% ( 407) 00:07:27.750 10.978 - 11.028: 9.6039% ( 1156) 00:07:27.750 11.028 - 11.077: 22.5046% ( 2254) 00:07:27.750 11.077 - 11.126: 39.3258% ( 2939) 00:07:27.750 11.126 - 11.175: 54.5616% ( 2662) 00:07:27.750 11.175 - 11.225: 66.4206% ( 2072) 00:07:27.750 11.225 - 11.274: 73.1571% ( 1177) 00:07:27.750 11.274 - 11.323: 76.6999% ( 619) 00:07:27.750 11.323 - 11.372: 78.7431% ( 357) 00:07:27.750 11.372 - 11.422: 79.9050% ( 203) 00:07:27.750 11.422 - 11.471: 80.7807% ( 153) 00:07:27.750 11.471 - 11.520: 81.4274% ( 113) 00:07:27.750 11.520 - 11.569: 82.0341% ( 106) 00:07:27.750 11.569 - 11.618: 82.6637% ( 110) 00:07:27.750 11.618 - 11.668: 83.2589% ( 104) 00:07:27.750 11.668 - 11.717: 83.9686% ( 124) 00:07:27.750 11.717 - 11.766: 84.7012% ( 128) 00:07:27.750 11.766 - 11.815: 85.4052% ( 123) 00:07:27.750 11.815 - 11.865: 85.9947% ( 103) 00:07:27.750 11.865 - 11.914: 86.9219% ( 162) 00:07:27.750 11.914 - 11.963: 87.7747% ( 149) 00:07:27.750 11.963 - 12.012: 88.6676% ( 156) 00:07:27.750 12.012 - 12.062: 89.7150% ( 183) 00:07:27.750 12.062 - 12.111: 90.6822% ( 169) 00:07:27.750 12.111 - 12.160: 91.6037% ( 161) 00:07:27.750 12.160 - 12.209: 92.4050% ( 140) 00:07:27.750 12.209 - 12.258: 93.1490% ( 130) 00:07:27.750 12.258 - 12.308: 93.7214% ( 100) 00:07:27.750 12.308 - 12.357: 94.1277% ( 71) 00:07:27.750 12.357 - 12.406: 94.4425% ( 55) 00:07:27.750 12.406 - 12.455: 94.6886% ( 43) 00:07:27.750 12.455 - 12.505: 94.8489% ( 28) 00:07:27.750 12.505 - 12.554: 95.0206% ( 30) 00:07:27.750 12.554 - 12.603: 95.1122% ( 16) 00:07:27.750 12.603 - 12.702: 95.2438% ( 23) 00:07:27.750 12.702 - 12.800: 95.3125% ( 12) 00:07:27.750 12.800 - 12.898: 95.3239% ( 2) 00:07:27.750 12.898 - 12.997: 95.3583% ( 6) 00:07:27.750 12.997 - 13.095: 95.4327% ( 13) 00:07:27.750 13.095 - 13.194: 95.5185% ( 15) 00:07:27.750 13.194 - 13.292: 95.7246% ( 36) 00:07:27.750 13.292 - 13.391: 95.8448% ( 21) 00:07:27.750 13.391 - 13.489: 95.9077% ( 11) 00:07:27.750 13.489 - 13.588: 95.9650% ( 10) 00:07:27.750 13.588 - 13.686: 96.0108% ( 8) 00:07:27.750 13.686 - 13.785: 96.0680% ( 10) 00:07:27.750 13.785 - 13.883: 96.0909% ( 4) 00:07:27.750 13.883 - 13.982: 96.1310% ( 7) 00:07:27.750 13.982 - 14.080: 96.2054% ( 13) 00:07:27.750 14.080 - 14.178: 96.2569% ( 9) 00:07:27.750 14.178 - 14.277: 96.3141% ( 10) 00:07:27.750 14.277 - 14.375: 96.3427% ( 5) 00:07:27.750 14.375 - 14.474: 96.3656% ( 4) 00:07:27.750 14.474 - 14.572: 96.4171% ( 9) 00:07:27.750 14.572 - 14.671: 96.4343% ( 3) 00:07:27.750 14.671 - 14.769: 96.4915% ( 10) 00:07:27.750 14.769 - 14.868: 96.5659% ( 13) 00:07:27.750 14.868 - 14.966: 96.8807% ( 55) 00:07:27.750 14.966 - 15.065: 97.2527% ( 65) 00:07:27.750 15.065 - 15.163: 97.4989% ( 43) 00:07:27.750 15.163 - 15.262: 97.6534% ( 27) 00:07:27.750 15.262 - 15.360: 97.7106% ( 10) 00:07:27.750 15.360 - 15.458: 97.7335% ( 4) 00:07:27.750 15.458 - 15.557: 97.7793% ( 8) 00:07:27.750 15.557 - 15.655: 97.8022% ( 4) 
00:07:27.750 15.655 - 15.754: 97.8308% ( 5) 00:07:27.750 15.754 - 15.852: 97.8423% ( 2) 00:07:27.750 15.852 - 15.951: 97.8766% ( 6) 00:07:27.750 16.049 - 16.148: 97.8995% ( 4) 00:07:27.750 16.148 - 16.246: 97.9281% ( 5) 00:07:27.750 16.246 - 16.345: 97.9453% ( 3) 00:07:27.750 16.345 - 16.443: 97.9625% ( 3) 00:07:27.750 16.443 - 16.542: 97.9796% ( 3) 00:07:27.750 16.542 - 16.640: 98.0140% ( 6) 00:07:27.750 16.640 - 16.738: 98.0655% ( 9) 00:07:27.750 16.738 - 16.837: 98.1571% ( 16) 00:07:27.750 16.837 - 16.935: 98.2315% ( 13) 00:07:27.751 16.935 - 17.034: 98.3402% ( 19) 00:07:27.751 17.034 - 17.132: 98.4089% ( 12) 00:07:27.751 17.132 - 17.231: 98.4432% ( 6) 00:07:27.751 17.231 - 17.329: 98.5176% ( 13) 00:07:27.751 17.329 - 17.428: 98.6321% ( 20) 00:07:27.751 17.428 - 17.526: 98.7523% ( 21) 00:07:27.751 17.526 - 17.625: 98.8267% ( 13) 00:07:27.751 17.625 - 17.723: 98.8839% ( 10) 00:07:27.751 17.723 - 17.822: 98.9183% ( 6) 00:07:27.751 17.822 - 17.920: 98.9755% ( 10) 00:07:27.751 17.920 - 18.018: 99.0213% ( 8) 00:07:27.751 18.018 - 18.117: 99.0442% ( 4) 00:07:27.751 18.117 - 18.215: 99.0728% ( 5) 00:07:27.751 18.215 - 18.314: 99.1186% ( 8) 00:07:27.751 18.314 - 18.412: 99.1529% ( 6) 00:07:27.751 18.412 - 18.511: 99.1815% ( 5) 00:07:27.751 18.511 - 18.609: 99.2102% ( 5) 00:07:27.751 18.609 - 18.708: 99.2159% ( 1) 00:07:27.751 18.708 - 18.806: 99.2388% ( 4) 00:07:27.751 18.806 - 18.905: 99.2560% ( 3) 00:07:27.751 18.905 - 19.003: 99.2617% ( 1) 00:07:27.751 19.003 - 19.102: 99.2731% ( 2) 00:07:27.751 19.102 - 19.200: 99.2903% ( 3) 00:07:27.751 19.200 - 19.298: 99.2960% ( 1) 00:07:27.751 19.298 - 19.397: 99.3017% ( 1) 00:07:27.751 19.397 - 19.495: 99.3075% ( 1) 00:07:27.751 19.495 - 19.594: 99.3132% ( 1) 00:07:27.751 19.692 - 19.791: 99.3189% ( 1) 00:07:27.751 19.791 - 19.889: 99.3246% ( 1) 00:07:27.751 19.988 - 20.086: 99.3304% ( 1) 00:07:27.751 20.086 - 20.185: 99.3361% ( 1) 00:07:27.751 20.185 - 20.283: 99.3418% ( 1) 00:07:27.751 20.480 - 20.578: 99.3475% ( 1) 00:07:27.751 20.578 - 20.677: 99.3647% ( 3) 00:07:27.751 20.677 - 20.775: 99.3704% ( 1) 00:07:27.751 21.071 - 21.169: 99.3761% ( 1) 00:07:27.751 21.366 - 21.465: 99.3819% ( 1) 00:07:27.751 21.465 - 21.563: 99.3876% ( 1) 00:07:27.751 21.563 - 21.662: 99.3990% ( 2) 00:07:27.751 21.662 - 21.760: 99.4048% ( 1) 00:07:27.751 21.858 - 21.957: 99.4105% ( 1) 00:07:27.751 21.957 - 22.055: 99.4162% ( 1) 00:07:27.751 22.055 - 22.154: 99.4219% ( 1) 00:07:27.751 22.252 - 22.351: 99.4277% ( 1) 00:07:27.751 22.351 - 22.449: 99.4334% ( 1) 00:07:27.751 22.449 - 22.548: 99.4391% ( 1) 00:07:27.751 22.745 - 22.843: 99.4448% ( 1) 00:07:27.751 22.843 - 22.942: 99.4505% ( 1) 00:07:27.751 22.942 - 23.040: 99.4620% ( 2) 00:07:27.751 23.040 - 23.138: 99.4677% ( 1) 00:07:27.751 23.237 - 23.335: 99.4734% ( 1) 00:07:27.751 23.729 - 23.828: 99.4792% ( 1) 00:07:27.751 24.615 - 24.714: 99.4849% ( 1) 00:07:27.751 25.108 - 25.206: 99.4906% ( 1) 00:07:27.751 25.600 - 25.797: 99.4963% ( 1) 00:07:27.751 25.797 - 25.994: 99.5078% ( 2) 00:07:27.751 26.388 - 26.585: 99.5135% ( 1) 00:07:27.751 26.782 - 26.978: 99.5192% ( 1) 00:07:27.751 27.175 - 27.372: 99.5250% ( 1) 00:07:27.751 27.569 - 27.766: 99.5307% ( 1) 00:07:27.751 30.917 - 31.114: 99.5936% ( 11) 00:07:27.751 31.114 - 31.311: 99.6680% ( 13) 00:07:27.751 31.311 - 31.508: 99.7539% ( 15) 00:07:27.751 31.508 - 31.705: 99.7940% ( 7) 00:07:27.751 31.705 - 31.902: 99.8397% ( 8) 00:07:27.751 31.902 - 32.098: 99.8855% ( 8) 00:07:27.751 32.098 - 32.295: 99.8913% ( 1) 00:07:27.751 32.295 - 32.492: 99.9084% ( 3) 00:07:27.751 32.689 - 
32.886: 99.9141% ( 1) 00:07:27.751 33.280 - 33.477: 99.9256% ( 2) 00:07:27.751 34.265 - 34.462: 99.9313% ( 1) 00:07:27.751 41.748 - 41.945: 99.9370% ( 1) 00:07:27.751 43.126 - 43.323: 99.9428% ( 1) 00:07:27.751 46.277 - 46.474: 99.9485% ( 1) 00:07:27.751 47.458 - 47.655: 99.9542% ( 1) 00:07:27.751 56.320 - 56.714: 99.9599% ( 1) 00:07:27.751 57.895 - 58.289: 99.9657% ( 1) 00:07:27.751 63.803 - 64.197: 99.9714% ( 1) 00:07:27.751 66.560 - 66.954: 99.9771% ( 1) 00:07:27.751 72.862 - 73.255: 99.9828% ( 1) 00:07:27.751 90.978 - 91.372: 99.9886% ( 1) 00:07:27.751 244.185 - 245.760: 99.9943% ( 1) 00:07:27.751 1209.895 - 1216.197: 100.0000% ( 1) 00:07:27.751 00:07:27.751 Complete histogram 00:07:27.751 ================== 00:07:27.751 Range in us Cumulative Count 00:07:27.751 7.188 - 7.237: 0.0114% ( 2) 00:07:27.751 7.237 - 7.286: 0.1889% ( 31) 00:07:27.751 7.286 - 7.335: 2.6442% ( 429) 00:07:27.751 7.335 - 7.385: 14.5318% ( 2077) 00:07:27.751 7.385 - 7.434: 35.8001% ( 3716) 00:07:27.751 7.434 - 7.483: 57.4462% ( 3782) 00:07:27.751 7.483 - 7.532: 71.7663% ( 2502) 00:07:27.751 7.532 - 7.582: 80.7292% ( 1566) 00:07:27.751 7.582 - 7.631: 86.1035% ( 939) 00:07:27.751 7.631 - 7.680: 89.8352% ( 652) 00:07:27.751 7.680 - 7.729: 92.2734% ( 426) 00:07:27.751 7.729 - 7.778: 93.6870% ( 247) 00:07:27.751 7.778 - 7.828: 94.3510% ( 116) 00:07:27.751 7.828 - 7.877: 94.6829% ( 58) 00:07:27.751 7.877 - 7.926: 94.8375% ( 27) 00:07:27.751 7.926 - 7.975: 94.9348% ( 17) 00:07:27.751 7.975 - 8.025: 95.0092% ( 13) 00:07:27.751 8.025 - 8.074: 95.0721% ( 11) 00:07:27.751 8.074 - 8.123: 95.1980% ( 22) 00:07:27.751 8.123 - 8.172: 95.4384% ( 42) 00:07:27.751 8.172 - 8.222: 95.6330% ( 34) 00:07:27.751 8.222 - 8.271: 95.8333% ( 35) 00:07:27.751 8.271 - 8.320: 96.0565% ( 39) 00:07:27.751 8.320 - 8.369: 96.2511% ( 34) 00:07:27.751 8.369 - 8.418: 96.3484% ( 17) 00:07:27.751 8.418 - 8.468: 96.3942% ( 8) 00:07:27.751 8.468 - 8.517: 96.4572% ( 11) 00:07:27.751 8.517 - 8.566: 96.4801% ( 4) 00:07:27.751 8.566 - 8.615: 96.5030% ( 4) 00:07:27.751 8.665 - 8.714: 96.5201% ( 3) 00:07:27.751 8.714 - 8.763: 96.5430% ( 4) 00:07:27.751 8.763 - 8.812: 96.5488% ( 1) 00:07:27.751 8.812 - 8.862: 96.5659% ( 3) 00:07:27.751 8.862 - 8.911: 96.5717% ( 1) 00:07:27.751 9.009 - 9.058: 96.5774% ( 1) 00:07:27.751 9.206 - 9.255: 96.5831% ( 1) 00:07:27.751 9.305 - 9.354: 96.5888% ( 1) 00:07:27.751 9.354 - 9.403: 96.6003% ( 2) 00:07:27.751 9.403 - 9.452: 96.6060% ( 1) 00:07:27.751 9.452 - 9.502: 96.6117% ( 1) 00:07:27.751 9.502 - 9.551: 96.6174% ( 1) 00:07:27.751 9.698 - 9.748: 96.6289% ( 2) 00:07:27.751 9.748 - 9.797: 96.6403% ( 2) 00:07:27.751 9.797 - 9.846: 96.6518% ( 2) 00:07:27.751 9.895 - 9.945: 96.6804% ( 5) 00:07:27.751 9.994 - 10.043: 96.6918% ( 2) 00:07:27.751 10.043 - 10.092: 96.7033% ( 2) 00:07:27.751 10.092 - 10.142: 96.7319% ( 5) 00:07:27.751 10.142 - 10.191: 96.7663% ( 6) 00:07:27.751 10.191 - 10.240: 96.7949% ( 5) 00:07:27.751 10.240 - 10.289: 96.9666% ( 30) 00:07:27.751 10.289 - 10.338: 97.2470% ( 49) 00:07:27.751 10.338 - 10.388: 97.4588% ( 37) 00:07:27.751 10.388 - 10.437: 97.6019% ( 25) 00:07:27.751 10.437 - 10.486: 97.7049% ( 18) 00:07:27.751 10.486 - 10.535: 97.7736% ( 12) 00:07:27.751 10.535 - 10.585: 97.8194% ( 8) 00:07:27.751 10.585 - 10.634: 97.8480% ( 5) 00:07:27.751 10.634 - 10.683: 97.8766% ( 5) 00:07:27.751 10.683 - 10.732: 97.8823% ( 1) 00:07:27.751 10.732 - 10.782: 97.9109% ( 5) 00:07:27.751 10.782 - 10.831: 97.9281% ( 3) 00:07:27.751 10.831 - 10.880: 97.9396% ( 2) 00:07:27.751 10.929 - 10.978: 97.9510% ( 2) 00:07:27.751 10.978 - 
11.028: 97.9567% ( 1) 00:07:27.751 11.077 - 11.126: 97.9625% ( 1) 00:07:27.751 11.175 - 11.225: 97.9682% ( 1) 00:07:27.751 11.225 - 11.274: 97.9739% ( 1) 00:07:27.751 11.323 - 11.372: 97.9796% ( 1) 00:07:27.751 11.520 - 11.569: 97.9968% ( 3) 00:07:27.751 11.569 - 11.618: 98.0082% ( 2) 00:07:27.751 11.668 - 11.717: 98.0140% ( 1) 00:07:27.751 11.766 - 11.815: 98.0197% ( 1) 00:07:27.751 11.865 - 11.914: 98.0254% ( 1) 00:07:27.751 11.914 - 11.963: 98.0311% ( 1) 00:07:27.751 12.012 - 12.062: 98.0369% ( 1) 00:07:27.751 12.062 - 12.111: 98.0483% ( 2) 00:07:27.751 12.209 - 12.258: 98.0540% ( 1) 00:07:27.751 12.406 - 12.455: 98.0598% ( 1) 00:07:27.751 12.603 - 12.702: 98.0655% ( 1) 00:07:27.751 12.800 - 12.898: 98.0884% ( 4) 00:07:27.751 12.898 - 12.997: 98.1113% ( 4) 00:07:27.751 12.997 - 13.095: 98.1571% ( 8) 00:07:27.751 13.095 - 13.194: 98.2257% ( 12) 00:07:27.751 13.194 - 13.292: 98.2887% ( 11) 00:07:27.751 13.292 - 13.391: 98.3631% ( 13) 00:07:27.751 13.391 - 13.489: 98.4489% ( 15) 00:07:27.751 13.489 - 13.588: 98.5520% ( 18) 00:07:27.751 13.588 - 13.686: 98.6264% ( 13) 00:07:27.751 13.686 - 13.785: 98.7523% ( 22) 00:07:27.751 13.785 - 13.883: 98.8152% ( 11) 00:07:27.751 13.883 - 13.982: 98.8839% ( 12) 00:07:27.751 13.982 - 14.080: 98.9469% ( 11) 00:07:27.751 14.080 - 14.178: 98.9812% ( 6) 00:07:27.751 14.178 - 14.277: 99.0213% ( 7) 00:07:27.751 14.277 - 14.375: 99.0785% ( 10) 00:07:27.751 14.375 - 14.474: 99.1415% ( 11) 00:07:27.751 14.474 - 14.572: 99.1758% ( 6) 00:07:27.751 14.572 - 14.671: 99.1930% ( 3) 00:07:27.751 14.671 - 14.769: 99.2159% ( 4) 00:07:27.751 14.769 - 14.868: 99.2388% ( 4) 00:07:27.751 14.868 - 14.966: 99.2617% ( 4) 00:07:27.751 15.065 - 15.163: 99.2731% ( 2) 00:07:27.751 15.163 - 15.262: 99.2788% ( 1) 00:07:27.751 15.262 - 15.360: 99.2846% ( 1) 00:07:27.751 15.360 - 15.458: 99.3017% ( 3) 00:07:27.751 15.458 - 15.557: 99.3132% ( 2) 00:07:27.751 15.754 - 15.852: 99.3189% ( 1) 00:07:27.751 15.852 - 15.951: 99.3246% ( 1) 00:07:27.751 16.246 - 16.345: 99.3304% ( 1) 00:07:27.751 16.443 - 16.542: 99.3533% ( 4) 00:07:27.751 16.542 - 16.640: 99.3647% ( 2) 00:07:27.751 16.640 - 16.738: 99.3704% ( 1) 00:07:27.751 16.837 - 16.935: 99.3819% ( 2) 00:07:27.751 17.034 - 17.132: 99.3876% ( 1) 00:07:27.751 17.231 - 17.329: 99.3933% ( 1) 00:07:27.751 17.723 - 17.822: 99.4048% ( 2) 00:07:27.751 17.920 - 18.018: 99.4105% ( 1) 00:07:27.751 18.314 - 18.412: 99.4162% ( 1) 00:07:27.751 18.412 - 18.511: 99.4334% ( 3) 00:07:27.751 18.905 - 19.003: 99.4391% ( 1) 00:07:27.751 19.003 - 19.102: 99.4448% ( 1) 00:07:27.751 19.102 - 19.200: 99.4505% ( 1) 00:07:27.751 19.200 - 19.298: 99.4563% ( 1) 00:07:27.751 19.298 - 19.397: 99.4620% ( 1) 00:07:27.751 19.397 - 19.495: 99.4677% ( 1) 00:07:27.751 19.791 - 19.889: 99.4734% ( 1) 00:07:27.751 19.988 - 20.086: 99.4792% ( 1) 00:07:27.751 20.677 - 20.775: 99.4906% ( 2) 00:07:27.751 20.874 - 20.972: 99.4963% ( 1) 00:07:27.752 20.972 - 21.071: 99.5021% ( 1) 00:07:27.752 21.071 - 21.169: 99.5078% ( 1) 00:07:27.752 21.366 - 21.465: 99.5135% ( 1) 00:07:27.752 21.563 - 21.662: 99.5192% ( 1) 00:07:27.752 21.957 - 22.055: 99.5250% ( 1) 00:07:27.752 22.055 - 22.154: 99.5593% ( 6) 00:07:27.752 22.154 - 22.252: 99.5879% ( 5) 00:07:27.752 22.252 - 22.351: 99.6680% ( 14) 00:07:27.752 22.351 - 22.449: 99.7367% ( 12) 00:07:27.752 22.449 - 22.548: 99.7940% ( 10) 00:07:27.752 22.548 - 22.646: 99.8455% ( 9) 00:07:27.752 22.646 - 22.745: 99.8684% ( 4) 00:07:27.752 22.745 - 22.843: 99.8855% ( 3) 00:07:27.752 22.942 - 23.040: 99.8913% ( 1) 00:07:27.752 23.040 - 23.138: 99.8970% ( 
1) 00:07:27.752 23.237 - 23.335: 99.9027% ( 1) 00:07:27.752 26.388 - 26.585: 99.9084% ( 1) 00:07:27.752 30.129 - 30.326: 99.9141% ( 1) 00:07:27.752 32.689 - 32.886: 99.9199% ( 1) 00:07:27.752 36.628 - 36.825: 99.9256% ( 1) 00:07:27.752 38.006 - 38.203: 99.9313% ( 1) 00:07:27.752 38.597 - 38.794: 99.9370% ( 1) 00:07:27.752 40.369 - 40.566: 99.9428% ( 1) 00:07:27.752 41.945 - 42.142: 99.9485% ( 1) 00:07:27.752 42.338 - 42.535: 99.9542% ( 1) 00:07:27.752 42.929 - 43.126: 99.9599% ( 1) 00:07:27.752 46.277 - 46.474: 99.9657% ( 1) 00:07:27.752 54.745 - 55.138: 99.9714% ( 1) 00:07:27.752 57.108 - 57.502: 99.9771% ( 1) 00:07:27.752 57.895 - 58.289: 99.9828% ( 1) 00:07:27.752 68.135 - 68.529: 99.9886% ( 1) 00:07:27.752 92.554 - 92.948: 99.9943% ( 1) 00:07:27.752 337.132 - 338.708: 100.0000% ( 1) 00:07:27.752 00:07:27.752 ************************************ 00:07:27.752 END TEST nvme_overhead 00:07:27.752 ************************************ 00:07:27.752 00:07:27.752 real 0m1.231s 00:07:27.752 user 0m1.070s 00:07:27.752 sys 0m0.107s 00:07:27.752 04:27:50 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.752 04:27:50 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:27.752 04:27:50 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:27.752 04:27:50 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:27.752 04:27:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.752 04:27:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.752 ************************************ 00:07:27.752 START TEST nvme_arbitration 00:07:27.752 ************************************ 00:07:27.752 04:27:50 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:31.034 Initializing NVMe Controllers 00:07:31.034 Attached to 0000:00:10.0 00:07:31.034 Attached to 0000:00:11.0 00:07:31.034 Attached to 0000:00:13.0 00:07:31.034 Attached to 0000:00:12.0 00:07:31.034 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:31.034 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:31.034 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:31.034 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:31.034 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:31.034 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:31.034 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:31.034 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:31.034 Initialization complete. Launching workers. 
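The nvme_arbitration figures that follow come from threads pinned to four cores, each driving an I/O queue pair created with an explicit arbitration priority (all urgent in this run, per the "Starting thread on core N with urgent priority queue" lines), so the per-core IO/s numbers show how the controller shares bandwidth between the queues. A minimal sketch of allocating such a queue pair with SPDK's public API follows; it is not the actual build/examples/arbitration source, and enabling weighted round robin in the controller options at attach time is assumed and omitted here.

    #include "spdk/nvme.h"

    /* Allocate an I/O queue pair whose commands are arbitrated at urgent
     * priority. The qprio setting only matters when the controller was
     * attached with the weighted round robin arbitration mechanism enabled. */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;

        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }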
00:07:31.034 Starting thread on core 1 with urgent priority queue 00:07:31.034 Starting thread on core 2 with urgent priority queue 00:07:31.034 Starting thread on core 3 with urgent priority queue 00:07:31.034 Starting thread on core 0 with urgent priority queue 00:07:31.034 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.034 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.034 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:31.034 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:31.034 QEMU NVMe Ctrl (12343 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.034 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:31.034 ======================================================== 00:07:31.034 00:07:31.034 00:07:31.034 real 0m3.314s 00:07:31.034 user 0m9.292s 00:07:31.034 sys 0m0.113s 00:07:31.034 ************************************ 00:07:31.034 END TEST nvme_arbitration 00:07:31.034 ************************************ 00:07:31.034 04:27:53 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.034 04:27:53 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:31.034 04:27:53 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:31.034 04:27:53 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:31.034 04:27:53 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.034 04:27:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.034 ************************************ 00:07:31.034 START TEST nvme_single_aen 00:07:31.034 ************************************ 00:07:31.034 04:27:53 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:31.293 Asynchronous Event Request test 00:07:31.293 Attached to 0000:00:10.0 00:07:31.293 Attached to 0000:00:11.0 00:07:31.293 Attached to 0000:00:13.0 00:07:31.293 Attached to 0000:00:12.0 00:07:31.293 Reset controller to setup AER completions for this process 00:07:31.293 Registering asynchronous event callbacks... 
00:07:31.293 Getting orig temperature thresholds of all controllers 00:07:31.293 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.293 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.293 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.293 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.293 Setting all controllers temperature threshold low to trigger AER 00:07:31.293 Waiting for all controllers temperature threshold to be set lower 00:07:31.293 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.293 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:31.293 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.293 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:31.293 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.293 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:31.293 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.293 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:31.293 Waiting for all controllers to trigger AER and reset threshold 00:07:31.293 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.293 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.293 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.293 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.293 Cleaning up... 00:07:31.293 00:07:31.293 real 0m0.218s 00:07:31.293 user 0m0.081s 00:07:31.293 sys 0m0.093s 00:07:31.293 ************************************ 00:07:31.293 END TEST nvme_single_aen 00:07:31.293 ************************************ 00:07:31.293 04:27:54 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.293 04:27:54 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:31.293 04:27:54 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:31.293 04:27:54 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.293 04:27:54 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.293 04:27:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.293 ************************************ 00:07:31.293 START TEST nvme_doorbell_aers 00:07:31.293 ************************************ 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:31.293 04:27:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:31.551 [2024-11-03 04:27:54.509820] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:07:41.526 Executing: test_write_invalid_db 00:07:41.526 Waiting for AER completion... 00:07:41.526 Failure: test_write_invalid_db 00:07:41.526 00:07:41.526 Executing: test_invalid_db_write_overflow_sq 00:07:41.526 Waiting for AER completion... 00:07:41.526 Failure: test_invalid_db_write_overflow_sq 00:07:41.526 00:07:41.526 Executing: test_invalid_db_write_overflow_cq 00:07:41.526 Waiting for AER completion... 00:07:41.526 Failure: test_invalid_db_write_overflow_cq 00:07:41.526 00:07:41.526 04:28:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.526 04:28:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:41.526 [2024-11-03 04:28:04.544008] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:07:51.493 Executing: test_write_invalid_db 00:07:51.493 Waiting for AER completion... 00:07:51.493 Failure: test_write_invalid_db 00:07:51.493 00:07:51.493 Executing: test_invalid_db_write_overflow_sq 00:07:51.493 Waiting for AER completion... 00:07:51.493 Failure: test_invalid_db_write_overflow_sq 00:07:51.493 00:07:51.493 Executing: test_invalid_db_write_overflow_cq 00:07:51.493 Waiting for AER completion... 00:07:51.493 Failure: test_invalid_db_write_overflow_cq 00:07:51.493 00:07:51.493 04:28:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.493 04:28:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:51.493 [2024-11-03 04:28:14.572918] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:01.458 Executing: test_write_invalid_db 00:08:01.458 Waiting for AER completion... 00:08:01.458 Failure: test_write_invalid_db 00:08:01.458 00:08:01.458 Executing: test_invalid_db_write_overflow_sq 00:08:01.458 Waiting for AER completion... 00:08:01.458 Failure: test_invalid_db_write_overflow_sq 00:08:01.458 00:08:01.458 Executing: test_invalid_db_write_overflow_cq 00:08:01.458 Waiting for AER completion... 
00:08:01.458 Failure: test_invalid_db_write_overflow_cq 00:08:01.458 00:08:01.458 04:28:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.458 04:28:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:01.716 [2024-11-03 04:28:24.622478] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.707 Executing: test_write_invalid_db 00:08:11.707 Waiting for AER completion... 00:08:11.707 Failure: test_write_invalid_db 00:08:11.707 00:08:11.707 Executing: test_invalid_db_write_overflow_sq 00:08:11.707 Waiting for AER completion... 00:08:11.707 Failure: test_invalid_db_write_overflow_sq 00:08:11.707 00:08:11.707 Executing: test_invalid_db_write_overflow_cq 00:08:11.707 Waiting for AER completion... 00:08:11.707 Failure: test_invalid_db_write_overflow_cq 00:08:11.707 00:08:11.707 00:08:11.707 real 0m40.187s 00:08:11.707 user 0m34.205s 00:08:11.707 sys 0m5.598s 00:08:11.707 04:28:34 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.707 04:28:34 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:11.707 ************************************ 00:08:11.707 END TEST nvme_doorbell_aers 00:08:11.707 ************************************ 00:08:11.707 04:28:34 nvme -- nvme/nvme.sh@97 -- # uname 00:08:11.707 04:28:34 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:11.707 04:28:34 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:11.707 04:28:34 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:11.707 04:28:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.707 04:28:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.707 ************************************ 00:08:11.707 START TEST nvme_multi_aen 00:08:11.708 ************************************ 00:08:11.708 04:28:34 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:11.708 [2024-11-03 04:28:34.666233] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.666293] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.666303] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.667648] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.667676] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.667685] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.668858] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. 
Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.668923] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.669007] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.669980] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.670080] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 [2024-11-03 04:28:34.670136] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63205) is not found. Dropping the request. 00:08:11.708 Child process pid: 63725 00:08:11.966 [Child] Asynchronous Event Request test 00:08:11.966 [Child] Attached to 0000:00:10.0 00:08:11.966 [Child] Attached to 0000:00:11.0 00:08:11.966 [Child] Attached to 0000:00:13.0 00:08:11.966 [Child] Attached to 0000:00:12.0 00:08:11.966 [Child] Registering asynchronous event callbacks... 00:08:11.966 [Child] Getting orig temperature thresholds of all controllers 00:08:11.966 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.966 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.966 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.966 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.966 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:11.966 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.966 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.966 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.966 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.966 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.966 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.966 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.966 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.966 [Child] Cleaning up... 00:08:11.966 Asynchronous Event Request test 00:08:11.967 Attached to 0000:00:10.0 00:08:11.967 Attached to 0000:00:11.0 00:08:11.967 Attached to 0000:00:13.0 00:08:11.967 Attached to 0000:00:12.0 00:08:11.967 Reset controller to setup AER completions for this process 00:08:11.967 Registering asynchronous event callbacks... 
00:08:11.967 Getting orig temperature thresholds of all controllers 00:08:11.967 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.967 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.967 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.967 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.967 Setting all controllers temperature threshold low to trigger AER 00:08:11.967 Waiting for all controllers temperature threshold to be set lower 00:08:11.967 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.967 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:11.967 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.967 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:11.967 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.967 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:11.967 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.967 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:11.967 Waiting for all controllers to trigger AER and reset threshold 00:08:11.967 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.967 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.967 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.967 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.967 Cleaning up... 00:08:11.967 00:08:11.967 real 0m0.438s 00:08:11.967 user 0m0.132s 00:08:11.967 sys 0m0.188s 00:08:11.967 04:28:34 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.967 04:28:34 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:11.967 ************************************ 00:08:11.967 END TEST nvme_multi_aen 00:08:11.967 ************************************ 00:08:11.967 04:28:34 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:11.967 04:28:34 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:11.967 04:28:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.967 04:28:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.967 ************************************ 00:08:11.967 START TEST nvme_startup 00:08:11.967 ************************************ 00:08:11.967 04:28:34 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:12.225 Initializing NVMe Controllers 00:08:12.225 Attached to 0000:00:10.0 00:08:12.225 Attached to 0000:00:11.0 00:08:12.225 Attached to 0000:00:13.0 00:08:12.225 Attached to 0000:00:12.0 00:08:12.225 Initialization complete. 00:08:12.225 Time used:132619.141 (us). 
00:08:12.225 00:08:12.225 real 0m0.189s 00:08:12.225 user 0m0.075s 00:08:12.225 sys 0m0.073s 00:08:12.225 04:28:35 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.225 ************************************ 00:08:12.225 END TEST nvme_startup 00:08:12.225 ************************************ 00:08:12.225 04:28:35 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:12.225 04:28:35 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:12.225 04:28:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.225 04:28:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.225 04:28:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.225 ************************************ 00:08:12.225 START TEST nvme_multi_secondary 00:08:12.225 ************************************ 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63776 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63777 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:12.225 04:28:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:15.505 Initializing NVMe Controllers 00:08:15.505 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.505 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.505 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.505 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.505 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:15.505 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:15.505 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:15.505 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:15.505 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:15.505 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:15.505 Initialization complete. Launching workers. 
00:08:15.505 ======================================================== 00:08:15.505 Latency(us) 00:08:15.505 Device Information : IOPS MiB/s Average min max 00:08:15.505 PCIE (0000:00:10.0) NSID 1 from core 1: 7897.87 30.85 2024.49 691.64 6127.16 00:08:15.505 PCIE (0000:00:11.0) NSID 1 from core 1: 7897.87 30.85 2025.44 719.78 5966.59 00:08:15.505 PCIE (0000:00:13.0) NSID 1 from core 1: 7897.87 30.85 2025.40 714.14 5899.52 00:08:15.505 PCIE (0000:00:12.0) NSID 1 from core 1: 7897.87 30.85 2025.36 706.57 5624.59 00:08:15.505 PCIE (0000:00:12.0) NSID 2 from core 1: 7897.87 30.85 2025.34 698.67 6023.93 00:08:15.505 PCIE (0000:00:12.0) NSID 3 from core 1: 7897.87 30.85 2025.32 719.55 5902.63 00:08:15.505 ======================================================== 00:08:15.506 Total : 47387.21 185.11 2025.22 691.64 6127.16 00:08:15.506 00:08:15.506 Initializing NVMe Controllers 00:08:15.506 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.506 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.506 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.506 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.506 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:15.506 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:15.506 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:15.506 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:15.506 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:15.506 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:15.506 Initialization complete. Launching workers. 00:08:15.506 ======================================================== 00:08:15.506 Latency(us) 00:08:15.506 Device Information : IOPS MiB/s Average min max 00:08:15.506 PCIE (0000:00:10.0) NSID 1 from core 2: 3263.13 12.75 4901.59 1108.62 12609.04 00:08:15.506 PCIE (0000:00:11.0) NSID 1 from core 2: 3263.13 12.75 4902.95 1207.99 12549.59 00:08:15.506 PCIE (0000:00:13.0) NSID 1 from core 2: 3263.13 12.75 4902.90 1088.94 12887.41 00:08:15.506 PCIE (0000:00:12.0) NSID 1 from core 2: 3263.13 12.75 4903.03 1193.67 12758.55 00:08:15.506 PCIE (0000:00:12.0) NSID 2 from core 2: 3263.13 12.75 4902.97 1086.87 13516.68 00:08:15.506 PCIE (0000:00:12.0) NSID 3 from core 2: 3263.13 12.75 4902.95 988.24 13255.49 00:08:15.506 ======================================================== 00:08:15.506 Total : 19578.77 76.48 4902.73 988.24 13516.68 00:08:15.506 00:08:15.506 04:28:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63776 00:08:18.045 Initializing NVMe Controllers 00:08:18.045 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.045 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.045 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.045 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.045 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:18.045 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:18.045 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:18.045 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:18.045 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:18.045 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:18.045 Initialization complete. Launching workers. 
00:08:18.045 ======================================================== 00:08:18.045 Latency(us) 00:08:18.045 Device Information : IOPS MiB/s Average min max 00:08:18.045 PCIE (0000:00:10.0) NSID 1 from core 0: 11220.41 43.83 1424.75 677.99 6689.03 00:08:18.045 PCIE (0000:00:11.0) NSID 1 from core 0: 11220.41 43.83 1425.57 677.40 5791.92 00:08:18.045 PCIE (0000:00:13.0) NSID 1 from core 0: 11220.41 43.83 1425.55 655.70 5792.89 00:08:18.045 PCIE (0000:00:12.0) NSID 1 from core 0: 11220.41 43.83 1425.52 625.44 7251.44 00:08:18.045 PCIE (0000:00:12.0) NSID 2 from core 0: 11220.41 43.83 1425.50 598.38 7164.78 00:08:18.045 PCIE (0000:00:12.0) NSID 3 from core 0: 11220.41 43.83 1425.48 589.95 7084.29 00:08:18.045 ======================================================== 00:08:18.045 Total : 67322.46 262.98 1425.40 589.95 7251.44 00:08:18.045 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63777 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63847 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63848 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:18.045 04:28:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:21.327 Initializing NVMe Controllers 00:08:21.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:21.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:21.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:21.327 Initialization complete. Launching workers. 
00:08:21.327 ======================================================== 00:08:21.327 Latency(us) 00:08:21.327 Device Information : IOPS MiB/s Average min max 00:08:21.327 PCIE (0000:00:10.0) NSID 1 from core 0: 7742.18 30.24 2065.22 707.40 5786.76 00:08:21.327 PCIE (0000:00:11.0) NSID 1 from core 0: 7742.18 30.24 2066.27 731.80 5666.76 00:08:21.327 PCIE (0000:00:13.0) NSID 1 from core 0: 7742.18 30.24 2066.31 736.28 6172.10 00:08:21.327 PCIE (0000:00:12.0) NSID 1 from core 0: 7742.18 30.24 2066.28 735.06 5960.83 00:08:21.327 PCIE (0000:00:12.0) NSID 2 from core 0: 7742.18 30.24 2066.32 726.14 5564.42 00:08:21.327 PCIE (0000:00:12.0) NSID 3 from core 0: 7742.18 30.24 2066.30 730.08 6267.23 00:08:21.327 ======================================================== 00:08:21.327 Total : 46453.10 181.46 2066.12 707.40 6267.23 00:08:21.327 00:08:21.327 Initializing NVMe Controllers 00:08:21.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:21.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:21.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:21.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:21.327 Initialization complete. Launching workers. 00:08:21.327 ======================================================== 00:08:21.327 Latency(us) 00:08:21.327 Device Information : IOPS MiB/s Average min max 00:08:21.327 PCIE (0000:00:10.0) NSID 1 from core 1: 7722.94 30.17 2070.40 743.75 5611.57 00:08:21.327 PCIE (0000:00:11.0) NSID 1 from core 1: 7722.94 30.17 2071.39 756.84 5917.25 00:08:21.327 PCIE (0000:00:13.0) NSID 1 from core 1: 7722.94 30.17 2071.35 750.06 5879.05 00:08:21.327 PCIE (0000:00:12.0) NSID 1 from core 1: 7722.94 30.17 2071.32 728.51 5745.46 00:08:21.327 PCIE (0000:00:12.0) NSID 2 from core 1: 7722.94 30.17 2071.28 748.79 5654.94 00:08:21.327 PCIE (0000:00:12.0) NSID 3 from core 1: 7722.94 30.17 2071.25 754.97 5806.97 00:08:21.327 ======================================================== 00:08:21.327 Total : 46337.62 181.01 2071.16 728.51 5917.25 00:08:21.327 00:08:23.229 Initializing NVMe Controllers 00:08:23.229 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.229 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.229 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.229 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.229 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:23.229 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:23.229 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:23.229 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:23.229 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:23.229 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:23.229 Initialization complete. Launching workers. 
00:08:23.229 ======================================================== 00:08:23.229 Latency(us) 00:08:23.229 Device Information : IOPS MiB/s Average min max 00:08:23.229 PCIE (0000:00:10.0) NSID 1 from core 2: 4895.36 19.12 3266.81 717.52 15476.25 00:08:23.229 PCIE (0000:00:11.0) NSID 1 from core 2: 4895.36 19.12 3268.01 687.14 12543.02 00:08:23.229 PCIE (0000:00:13.0) NSID 1 from core 2: 4895.36 19.12 3267.63 732.56 12260.49 00:08:23.229 PCIE (0000:00:12.0) NSID 1 from core 2: 4895.36 19.12 3267.72 735.64 12285.69 00:08:23.229 PCIE (0000:00:12.0) NSID 2 from core 2: 4895.36 19.12 3267.50 692.49 12280.56 00:08:23.229 PCIE (0000:00:12.0) NSID 3 from core 2: 4895.36 19.12 3267.78 633.42 12627.54 00:08:23.229 ======================================================== 00:08:23.229 Total : 29372.16 114.73 3267.58 633.42 15476.25 00:08:23.229 00:08:23.229 ************************************ 00:08:23.229 END TEST nvme_multi_secondary 00:08:23.230 ************************************ 00:08:23.230 04:28:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63847 00:08:23.230 04:28:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63848 00:08:23.230 00:08:23.230 real 0m10.900s 00:08:23.230 user 0m18.432s 00:08:23.230 sys 0m0.594s 00:08:23.230 04:28:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:23.230 04:28:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:23.230 04:28:46 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:23.230 04:28:46 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62814 ]] 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1092 -- # kill 62814 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1093 -- # wait 62814 00:08:23.230 [2024-11-03 04:28:46.115951] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.116017] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.116041] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.116056] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.118107] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.118154] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.118168] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.118183] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.120254] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 
00:08:23.230 [2024-11-03 04:28:46.120296] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.120309] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.120323] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.122327] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.122371] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.122385] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.122400] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63724) is not found. Dropping the request. 00:08:23.230 [2024-11-03 04:28:46.233407] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:08:23.230 04:28:46 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:23.230 04:28:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.230 ************************************ 00:08:23.230 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:23.230 ************************************ 00:08:23.230 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:23.489 * Looking for test storage... 
00:08:23.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.489 --rc genhtml_branch_coverage=1 00:08:23.489 --rc genhtml_function_coverage=1 00:08:23.489 --rc genhtml_legend=1 00:08:23.489 --rc geninfo_all_blocks=1 00:08:23.489 --rc geninfo_unexecuted_blocks=1 00:08:23.489 00:08:23.489 ' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.489 --rc genhtml_branch_coverage=1 00:08:23.489 --rc genhtml_function_coverage=1 00:08:23.489 --rc genhtml_legend=1 00:08:23.489 --rc geninfo_all_blocks=1 00:08:23.489 --rc geninfo_unexecuted_blocks=1 00:08:23.489 00:08:23.489 ' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.489 --rc genhtml_branch_coverage=1 00:08:23.489 --rc genhtml_function_coverage=1 00:08:23.489 --rc genhtml_legend=1 00:08:23.489 --rc geninfo_all_blocks=1 00:08:23.489 --rc geninfo_unexecuted_blocks=1 00:08:23.489 00:08:23.489 ' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:23.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.489 --rc genhtml_branch_coverage=1 00:08:23.489 --rc genhtml_function_coverage=1 00:08:23.489 --rc genhtml_legend=1 00:08:23.489 --rc geninfo_all_blocks=1 00:08:23.489 --rc geninfo_unexecuted_blocks=1 00:08:23.489 00:08:23.489 ' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:23.489 
04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64014 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64014 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 64014 ']' 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.489 04:28:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:23.489 [2024-11-03 04:28:46.532389] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:08:23.490 [2024-11-03 04:28:46.532481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64014 ] 00:08:23.748 [2024-11-03 04:28:46.695824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.748 [2024-11-03 04:28:46.821383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.748 [2024-11-03 04:28:46.821732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.748 [2024-11-03 04:28:46.821850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.748 [2024-11-03 04:28:46.822093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.684 nvme0n1 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_X73zf.txt 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.684 true 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730608127 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64037 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:24.684 04:28:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:26.626 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:26.626 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.626 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:26.626 [2024-11-03 04:28:49.533187] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:26.626 [2024-11-03 04:28:49.533472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:26.626 [2024-11-03 04:28:49.533500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:26.626 [2024-11-03 04:28:49.533515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.626 [2024-11-03 04:28:49.536944] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:26.627 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64037 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64037 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64037 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_X73zf.txt 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_X73zf.txt 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64014 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 64014 ']' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 64014 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64014 00:08:26.627 killing process with pid 64014 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64014' 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 64014 00:08:26.627 04:28:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 64014 00:08:28.000 04:28:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:28.000 04:28:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:28.000 ************************************ 00:08:28.000 00:08:28.000 real 0m4.767s 00:08:28.000 user 0m16.903s 00:08:28.000 sys 0m0.502s 00:08:28.000 04:28:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.000 04:28:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.000 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:28.000 ************************************ 00:08:28.000 04:28:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:28.000 04:28:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:28.000 04:28:51 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:28.000 04:28:51 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.000 04:28:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.258 ************************************ 00:08:28.258 START TEST nvme_fio 00:08:28.258 ************************************ 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:28.258 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.258 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:28.516 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.516 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:28.516 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:28.516 04:28:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:28.516 04:28:51 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:28.516 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:28.775 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:28.775 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:28.775 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:28.775 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:28.775 04:28:51 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:28.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:28.775 fio-3.35 00:08:28.775 Starting 1 thread 00:08:34.064 00:08:34.064 test: (groupid=0, jobs=1): err= 0: pid=64171: Sun Nov 3 04:28:56 2024 00:08:34.064 read: IOPS=16.6k, BW=64.9MiB/s (68.0MB/s)(130MiB/2001msec) 00:08:34.064 slat (nsec): min=4888, max=78313, avg=6535.85, stdev=3139.89 00:08:34.064 clat (usec): min=295, max=10920, avg=3822.56, stdev=1230.30 00:08:34.064 lat (usec): min=300, max=10925, avg=3829.09, stdev=1231.50 00:08:34.064 clat percentiles (usec): 00:08:34.064 | 1.00th=[ 2343], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2933], 00:08:34.064 | 30.00th=[ 3064], 40.00th=[ 3195], 50.00th=[ 3392], 60.00th=[ 3654], 00:08:34.064 | 70.00th=[ 3949], 80.00th=[ 4621], 90.00th=[ 5669], 95.00th=[ 6521], 00:08:34.064 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[10159], 99.95th=[10552], 00:08:34.064 | 99.99th=[10814] 00:08:34.064 bw ( KiB/s): min=64928, max=68240, per=100.00%, avg=66440.00, stdev=1674.68, samples=3 00:08:34.064 iops : min=16232, max=17060, avg=16610.00, stdev=418.67, samples=3 00:08:34.064 write: IOPS=16.6k, BW=65.0MiB/s (68.2MB/s)(130MiB/2001msec); 0 zone resets 00:08:34.064 slat (usec): min=5, max=103, avg= 6.69, stdev= 3.15 00:08:34.064 clat (usec): min=206, max=11438, avg=3848.26, stdev=1241.24 00:08:34.064 lat (usec): min=211, max=11443, avg=3854.95, stdev=1242.45 00:08:34.064 clat percentiles (usec): 00:08:34.064 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2966], 00:08:34.064 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3425], 60.00th=[ 3654], 00:08:34.064 | 70.00th=[ 3982], 80.00th=[ 4621], 90.00th=[ 5669], 95.00th=[ 6521], 00:08:34.064 | 99.00th=[ 8094], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[10552], 00:08:34.064 | 99.99th=[11207] 00:08:34.064 bw ( KiB/s): min=64768, max=68448, per=99.70%, avg=66352.00, stdev=1892.67, samples=3 00:08:34.064 iops : min=16192, max=17112, avg=16588.00, stdev=473.17, samples=3 00:08:34.064 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:08:34.064 lat (msec) : 2=0.42%, 4=70.08%, 10=29.32%, 20=0.13% 00:08:34.064 cpu : usr=98.80%, sys=0.00%, ctx=5, 
majf=0, minf=607 00:08:34.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:34.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.064 issued rwts: total=33223,33294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.064 00:08:34.064 Run status group 0 (all jobs): 00:08:34.064 READ: bw=64.9MiB/s (68.0MB/s), 64.9MiB/s-64.9MiB/s (68.0MB/s-68.0MB/s), io=130MiB (136MB), run=2001-2001msec 00:08:34.064 WRITE: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=130MiB (136MB), run=2001-2001msec 00:08:34.064 ----------------------------------------------------- 00:08:34.064 Suppressions used: 00:08:34.064 count bytes template 00:08:34.064 1 32 /usr/src/fio/parse.c 00:08:34.064 1 8 libtcmalloc_minimal.so 00:08:34.064 ----------------------------------------------------- 00:08:34.064 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:34.064 04:28:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:34.064 04:28:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:34.064 04:28:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:34.064 04:28:57 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:34.064 04:28:57 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:34.324 fio-3.35 00:08:34.324 Starting 1 thread 00:08:40.910 00:08:40.910 test: (groupid=0, jobs=1): err= 0: pid=64233: Sun Nov 3 04:29:02 2024 00:08:40.910 read: IOPS=19.3k, BW=75.6MiB/s (79.2MB/s)(151MiB/2001msec) 00:08:40.910 slat (nsec): min=3470, max=84012, avg=5622.89, stdev=2610.40 00:08:40.910 clat (usec): min=332, max=9909, avg=3294.07, stdev=1125.33 00:08:40.910 lat (usec): min=337, max=9993, avg=3299.69, stdev=1126.54 00:08:40.910 clat percentiles (usec): 00:08:40.910 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:08:40.910 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3064], 00:08:40.910 | 70.00th=[ 3294], 80.00th=[ 3785], 90.00th=[ 5145], 95.00th=[ 5932], 00:08:40.910 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 8291], 00:08:40.910 | 99.99th=[ 9241] 00:08:40.910 bw ( KiB/s): min=76448, max=81136, per=100.00%, avg=79130.67, stdev=2416.28, samples=3 00:08:40.910 iops : min=19112, max=20284, avg=19782.67, stdev=604.07, samples=3 00:08:40.910 write: IOPS=19.3k, BW=75.4MiB/s (79.1MB/s)(151MiB/2001msec); 0 zone resets 00:08:40.910 slat (usec): min=3, max=169, avg= 5.75, stdev= 2.79 00:08:40.910 clat (usec): min=307, max=9768, avg=3302.67, stdev=1120.16 00:08:40.910 lat (usec): min=312, max=9789, avg=3308.41, stdev=1121.32 00:08:40.910 clat percentiles (usec): 00:08:40.910 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:08:40.910 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3097], 00:08:40.910 | 70.00th=[ 3294], 80.00th=[ 3785], 90.00th=[ 5145], 95.00th=[ 5866], 00:08:40.910 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 8455], 00:08:40.910 | 99.99th=[ 9110] 00:08:40.910 bw ( KiB/s): min=76344, max=81472, per=100.00%, avg=79178.67, stdev=2606.51, samples=3 00:08:40.910 iops : min=19086, max=20368, avg=19794.67, stdev=651.63, samples=3 00:08:40.910 lat (usec) : 500=0.01%, 1000=0.04% 00:08:40.910 lat (msec) : 2=1.17%, 4=80.51%, 10=18.27% 00:08:40.910 cpu : usr=98.85%, sys=0.25%, ctx=4, majf=0, minf=607 00:08:40.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:40.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:40.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:40.910 issued rwts: total=38708,38631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:40.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:40.910 00:08:40.910 Run status group 0 (all jobs): 00:08:40.910 READ: bw=75.6MiB/s (79.2MB/s), 75.6MiB/s-75.6MiB/s (79.2MB/s-79.2MB/s), io=151MiB (159MB), run=2001-2001msec 00:08:40.910 WRITE: bw=75.4MiB/s (79.1MB/s), 75.4MiB/s-75.4MiB/s (79.1MB/s-79.1MB/s), io=151MiB (158MB), run=2001-2001msec 00:08:40.910 ----------------------------------------------------- 00:08:40.910 Suppressions used: 00:08:40.910 count bytes template 00:08:40.910 1 32 /usr/src/fio/parse.c 00:08:40.910 1 8 libtcmalloc_minimal.so 00:08:40.910 ----------------------------------------------------- 00:08:40.910 00:08:40.910 04:29:03 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:40.910 04:29:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:40.910 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:40.911 04:29:03 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.911 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:40.911 fio-3.35 00:08:40.911 Starting 1 thread 00:08:46.225 00:08:46.225 test: (groupid=0, jobs=1): err= 0: pid=64295: Sun Nov 3 04:29:09 2024 00:08:46.225 read: IOPS=16.2k, BW=63.1MiB/s (66.2MB/s)(126MiB/2001msec) 00:08:46.225 slat (usec): min=4, max=114, avg= 6.13, stdev= 3.48 00:08:46.225 clat (usec): min=1051, max=10294, avg=3929.65, stdev=1480.26 00:08:46.225 lat (usec): min=1056, max=10305, avg=3935.78, stdev=1481.58 00:08:46.225 clat percentiles (usec): 00:08:46.225 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2704], 00:08:46.225 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3392], 
60.00th=[ 3818], 00:08:46.225 | 70.00th=[ 4555], 80.00th=[ 5276], 90.00th=[ 6128], 95.00th=[ 6849], 00:08:46.225 | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9634], 00:08:46.225 | 99.99th=[ 9896] 00:08:46.225 bw ( KiB/s): min=53328, max=65944, per=94.57%, avg=61146.67, stdev=6829.14, samples=3 00:08:46.225 iops : min=13332, max=16486, avg=15286.67, stdev=1707.29, samples=3 00:08:46.225 write: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(127MiB/2001msec); 0 zone resets 00:08:46.225 slat (nsec): min=4297, max=56412, avg=6260.01, stdev=3455.40 00:08:46.225 clat (usec): min=1034, max=10408, avg=3956.61, stdev=1486.74 00:08:46.225 lat (usec): min=1039, max=10415, avg=3962.87, stdev=1488.06 00:08:46.225 clat percentiles (usec): 00:08:46.225 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2704], 00:08:46.225 | 30.00th=[ 2900], 40.00th=[ 3097], 50.00th=[ 3392], 60.00th=[ 3818], 00:08:46.225 | 70.00th=[ 4555], 80.00th=[ 5342], 90.00th=[ 6128], 95.00th=[ 6915], 00:08:46.225 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9503], 00:08:46.225 | 99.99th=[10028] 00:08:46.225 bw ( KiB/s): min=52464, max=65160, per=93.75%, avg=60698.67, stdev=7139.72, samples=3 00:08:46.225 iops : min=13116, max=16290, avg=15174.67, stdev=1784.93, samples=3 00:08:46.225 lat (msec) : 2=0.42%, 4=62.66%, 10=36.91%, 20=0.01% 00:08:46.225 cpu : usr=98.70%, sys=0.10%, ctx=2, majf=0, minf=607 00:08:46.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:46.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.225 issued rwts: total=32344,32388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.225 00:08:46.225 Run status group 0 (all jobs): 00:08:46.225 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=126MiB (132MB), run=2001-2001msec 00:08:46.225 WRITE: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=127MiB (133MB), run=2001-2001msec 00:08:46.486 ----------------------------------------------------- 00:08:46.486 Suppressions used: 00:08:46.486 count bytes template 00:08:46.486 1 32 /usr/src/fio/parse.c 00:08:46.486 1 8 libtcmalloc_minimal.so 00:08:46.486 ----------------------------------------------------- 00:08:46.486 00:08:46.486 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:46.486 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:46.486 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:46.486 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:46.757 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:46.757 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:47.019 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:47.019 04:29:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:47.019 04:29:09 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.019 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:47.019 fio-3.35 00:08:47.019 Starting 1 thread 00:08:55.155 00:08:55.156 test: (groupid=0, jobs=1): err= 0: pid=64355: Sun Nov 3 04:29:17 2024 00:08:55.156 read: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(128MiB/2001msec) 00:08:55.156 slat (nsec): min=4313, max=88917, avg=6315.48, stdev=3713.68 00:08:55.156 clat (usec): min=605, max=11451, avg=3900.30, stdev=1450.66 00:08:55.156 lat (usec): min=610, max=11521, avg=3906.61, stdev=1452.17 00:08:55.156 clat percentiles (usec): 00:08:55.156 | 1.00th=[ 2057], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:08:55.156 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3326], 60.00th=[ 3752], 00:08:55.156 | 70.00th=[ 4490], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6718], 00:08:55.156 | 99.00th=[ 8029], 99.50th=[ 8356], 99.90th=[ 9110], 99.95th=[ 9765], 00:08:55.156 | 99.99th=[10814] 00:08:55.156 bw ( KiB/s): min=59552, max=68240, per=98.61%, avg=64344.00, stdev=4412.76, samples=3 00:08:55.156 iops : min=14888, max=17060, avg=16086.00, stdev=1103.19, samples=3 00:08:55.156 write: IOPS=16.3k, BW=63.9MiB/s (67.0MB/s)(128MiB/2001msec); 0 zone resets 00:08:55.156 slat (nsec): min=4352, max=88615, avg=6366.10, stdev=3647.19 00:08:55.156 clat (usec): min=1161, max=10848, avg=3910.08, stdev=1435.45 00:08:55.156 lat (usec): min=1166, max=10865, avg=3916.44, stdev=1436.92 00:08:55.156 clat percentiles (usec): 00:08:55.156 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2737], 00:08:55.156 | 30.00th=[ 2900], 40.00th=[ 3097], 50.00th=[ 3326], 60.00th=[ 3752], 00:08:55.156 | 70.00th=[ 4490], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6718], 00:08:55.156 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[ 9241], 99.95th=[ 9896], 
00:08:55.156 | 99.99th=[10683] 00:08:55.156 bw ( KiB/s): min=59856, max=67664, per=97.96%, avg=64058.67, stdev=3938.12, samples=3 00:08:55.156 iops : min=14964, max=16916, avg=16014.67, stdev=984.53, samples=3 00:08:55.156 lat (usec) : 750=0.01% 00:08:55.156 lat (msec) : 2=0.76%, 4=62.66%, 10=36.54%, 20=0.04% 00:08:55.156 cpu : usr=98.60%, sys=0.20%, ctx=4, majf=0, minf=605 00:08:55.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:55.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.156 issued rwts: total=32641,32711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.156 00:08:55.156 Run status group 0 (all jobs): 00:08:55.156 READ: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=128MiB (134MB), run=2001-2001msec 00:08:55.156 WRITE: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=128MiB (134MB), run=2001-2001msec 00:08:55.156 ----------------------------------------------------- 00:08:55.156 Suppressions used: 00:08:55.156 count bytes template 00:08:55.156 1 32 /usr/src/fio/parse.c 00:08:55.156 1 8 libtcmalloc_minimal.so 00:08:55.156 ----------------------------------------------------- 00:08:55.156 00:08:55.156 ************************************ 00:08:55.156 END TEST nvme_fio 00:08:55.156 ************************************ 00:08:55.156 04:29:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:55.156 04:29:18 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:55.156 00:08:55.156 real 0m27.109s 00:08:55.156 user 0m17.736s 00:08:55.156 sys 0m15.971s 00:08:55.156 04:29:18 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.156 04:29:18 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:55.414 ************************************ 00:08:55.414 END TEST nvme 00:08:55.414 ************************************ 00:08:55.414 00:08:55.414 real 1m36.234s 00:08:55.414 user 3m38.451s 00:08:55.414 sys 0m26.215s 00:08:55.414 04:29:18 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.414 04:29:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.414 04:29:18 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:55.414 04:29:18 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:55.414 04:29:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.414 04:29:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.414 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.414 ************************************ 00:08:55.414 START TEST nvme_scc 00:08:55.414 ************************************ 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:55.414 * Looking for test storage... 
00:08:55.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.414 04:29:18 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.414 04:29:18 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.414 --rc genhtml_branch_coverage=1 00:08:55.414 --rc genhtml_function_coverage=1 00:08:55.414 --rc genhtml_legend=1 00:08:55.414 --rc geninfo_all_blocks=1 00:08:55.415 --rc geninfo_unexecuted_blocks=1 00:08:55.415 00:08:55.415 ' 00:08:55.415 04:29:18 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.415 --rc genhtml_branch_coverage=1 00:08:55.415 --rc genhtml_function_coverage=1 00:08:55.415 --rc genhtml_legend=1 00:08:55.415 --rc geninfo_all_blocks=1 00:08:55.415 --rc geninfo_unexecuted_blocks=1 00:08:55.415 00:08:55.415 ' 00:08:55.415 04:29:18 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:08:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.415 --rc genhtml_branch_coverage=1 00:08:55.415 --rc genhtml_function_coverage=1 00:08:55.415 --rc genhtml_legend=1 00:08:55.415 --rc geninfo_all_blocks=1 00:08:55.415 --rc geninfo_unexecuted_blocks=1 00:08:55.415 00:08:55.415 ' 00:08:55.415 04:29:18 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.415 --rc genhtml_branch_coverage=1 00:08:55.415 --rc genhtml_function_coverage=1 00:08:55.415 --rc genhtml_legend=1 00:08:55.415 --rc geninfo_all_blocks=1 00:08:55.415 --rc geninfo_unexecuted_blocks=1 00:08:55.415 00:08:55.415 ' 00:08:55.415 04:29:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.415 04:29:18 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.415 04:29:18 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.415 04:29:18 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.415 04:29:18 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.415 04:29:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.415 04:29:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.415 04:29:18 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.415 04:29:18 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:55.415 04:29:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:55.415 04:29:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:55.415 04:29:18 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.415 04:29:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:55.415 04:29:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:55.415 04:29:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:55.415 04:29:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:55.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.987 Waiting for block devices as requested 00:08:55.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.987 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.248 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.248 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.552 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:01.552 04:29:24 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:01.552 04:29:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:01.552 04:29:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:01.552 04:29:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:01.552 04:29:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.552 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.553 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:01.554 04:29:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:01.554 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:01.555 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:01.555 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
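The nvme0n1 values already captured above are enough to sanity-check the drive size: nsze=0x140000 blocks with flbas=0x4, which selects LBA format 4 (listed a little further down in the same dump as ms:0 lbads:12, i.e. 2^12 = 4096-byte blocks). A quick arithmetic check, assuming those values:

# 0x140000 LBAs at 4096 B each (lbads:12) for the first QEMU test namespace:
printf '%d blocks x %d B = %d GiB\n' $((0x140000)) $((1 << 12)) $(( (0x140000 * (1 << 12)) >> 30 ))
# -> 1310720 blocks x 4096 B = 5 GiB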
00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:01.556 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.557 04:29:24 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:01.557 04:29:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:01.557 04:29:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:01.557 04:29:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:01.557 04:29:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:01.557 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:01.557 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:01.558 
04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
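For this second controller (QEMU NVMe Ctrl, sn 12340) the interesting ceiling is mdts=7 a few lines up: MDTS is a power-of-two multiple of the controller's minimum memory page size, so assuming the common 4 KiB MPSMIN, the largest single transfer the test I/O can issue works out as:

# mdts=7 -> 2^7 minimum-size pages; assuming 4 KiB MPSMIN (not read from the log):
echo "$(( (1 << 7) * 4096 / 1024 )) KiB max transfer"   # -> 512 KiB max transfer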
00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:01.558 04:29:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.558 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:01.559 04:29:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:01.559 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:01.560 04:29:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
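The per-register dumps are driven by an outer loop that walks /sys/class/nvme/nvme*, resolves each controller's PCI address, filters it through pci_can_use, and records the result in the ctrls/nvmes/bdfs/ordered_ctrls maps; that is what registered nvme0 against 0000:00:11.0 above and then picked up nvme1 at 0000:00:10.0. A rough sketch of that discovery loop, with simplified names, a generic readlink-based PCI lookup, and none of the allow/block-list handling scripts/common.sh performs:

# Simplified discovery loop; the real code also builds per-namespace arrays and
# applies PCI allow/block lists before accepting a controller.
declare -A ctrls bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by controller number
done
for c in "${ordered_ctrls[@]}"; do echo "$c -> ${bdfs[$c]}"; done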
00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.560 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.561 
04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:01.561 04:29:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:01.562 04:29:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:01.562 04:29:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:01.562 04:29:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:01.562 04:29:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:01.562 04:29:24 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:01.562 04:29:24 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:01.562 04:29:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:01.562 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:01.563 04:29:24 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.563 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:01.564 
04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:01.564 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.565 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.566 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:01.566 04:29:24 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:01.566 04:29:24 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:01.566 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
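[editor's note] The functions.sh@16-23 markers repeated throughout this trace all come from one small helper that pipes '/usr/local/src/nvme-cli/nvme id-ns' (or 'id-ctrl') output into a global associative array, one register per key. Below is a minimal sketch of that loop reconstructed only from the markers in this log; the helper name nvme_get and the array names are taken from the trace, while the argument handling and whitespace trimming are assumptions, not the verbatim SPDK function.

  nvme_get() {
      # $1 = name of the global associative array to fill (e.g. nvme2n2),
      # remaining args = nvme-cli subcommand and device (e.g. id-ns /dev/nvme2n2)
      local ref=$1 reg val
      shift
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
          # the trace's [[ -n ... ]] guard: skip lines that carry no value
          [[ -n $val ]] || continue
          # the trace's eval 'nvme2n2[reg]="val"' assignment; trimming here is approximate
          eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # binary path as logged at functions.sh@16
  }

Called as 'nvme_get nvme2n2 id-ns /dev/nvme2n2' (the functions.sh@57 invocation above), a loop of this shape produces assignments like the nvme2n2[nsze], nvme2n2[mssrl], ... entries seen in the surrounding log.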
00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.567 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 
04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 
04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:01.568 04:29:24 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.568 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:01.569 
04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:01.569 04:29:24 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:01.569 04:29:24 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:01.569 04:29:24 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:01.569 04:29:24 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.569 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:01.570 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
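[editor's note] At the controller level, the functions.sh@47-63 and scripts/common.sh@18-27 markers (visible where nvme2 is finished and nvme3 is picked up) imply the outer enumeration: walk /sys/class/nvme/nvme*, filter by PCI address, run id-ctrl, run id-ns for each namespace, then record the results in the ctrls, nvmes, bdfs and ordered_ctrls bookkeeping arrays. A rough outline under those assumptions follows; the BDF derivation and loop internals are not shown in the trace and are guesses, and the arrays are assumed to be declared elsewhere (e.g. declare -A ctrls nvmes bdfs; declare -a ordered_ctrls _ctrl_ns).

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0; derivation assumed
      pci_can_use "$pci" || continue                    # allow/deny filter from scripts/common.sh
      ctrl_dev=${ctrl##*/}                              # e.g. nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme3[vid], nvme3[sn], ...
      for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme3n1, nvme3n2, ...
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns##*n}]=$ns_dev
      done
      ctrls["$ctrl_dev"]=$ctrl_dev                      # bookkeeping seen at functions.sh@60-63
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done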
00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:01.570 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 
04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:01.571 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
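(Note: the repeated eval 'nvme3[reg]="val"' entries above are the trace of the nvme_get helper reading nvme-cli id-ctrl output field by field into a bash associative array. A simplified, stand-alone sketch of that pattern follows; it is an illustration only, not the exact test/common/nvme/functions.sh code.)

    # Simplified illustration only; not the exact nvme/functions.sh implementation.
    # Each "field : value" line of nvme-cli's id-ctrl output is split on the first
    # ':' and stored in a bash associative array keyed by the register name.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg="${reg//[[:space:]]/}"                                    # e.g. "oncs", "sqes"
        val="$(sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' <<< "$val")"
        [[ -n $val ]] && nvme3[$reg]=$val                             # keep only non-empty fields
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${nvme3[oncs]} sqes=${nvme3[sqes]} cqes=${nvme3[cqes]}"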
00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:01.572 04:29:24 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:01.572 04:29:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:01.572 
04:29:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.572 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:01.573 04:29:24 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:01.573 04:29:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:01.573 04:29:24 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:01.573 04:29:24 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.754 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.754 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.754 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:02.754 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:02.754 04:29:25 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:02.754 04:29:25 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:02.754 04:29:25 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.754 04:29:25 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:02.754 ************************************ 00:09:02.754 START TEST nvme_simple_copy 00:09:02.754 ************************************ 00:09:02.754 04:29:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:03.016 Initializing NVMe Controllers 00:09:03.016 Attaching to 0000:00:10.0 00:09:03.016 Controller supports SCC. Attached to 0000:00:10.0 00:09:03.016 Namespace ID: 1 size: 6GB 00:09:03.016 Initialization complete. 00:09:03.016 00:09:03.016 Controller QEMU NVMe Ctrl (12340 ) 00:09:03.016 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:03.016 Namespace Block Size:4096 00:09:03.016 Writing LBAs 0 to 63 with Random Data 00:09:03.016 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:03.016 LBAs matching Written Data: 64 00:09:03.016 00:09:03.016 real 0m0.283s 00:09:03.016 user 0m0.115s 00:09:03.016 sys 0m0.066s 00:09:03.016 ************************************ 00:09:03.016 END TEST nvme_simple_copy 00:09:03.016 ************************************ 00:09:03.016 04:29:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.016 04:29:25 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:03.016 ************************************ 00:09:03.016 END TEST nvme_scc 00:09:03.016 ************************************ 00:09:03.016 00:09:03.016 real 0m7.737s 00:09:03.016 user 0m1.103s 00:09:03.016 sys 0m1.355s 00:09:03.016 04:29:26 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.016 04:29:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:03.016 04:29:26 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:03.016 04:29:26 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:03.016 04:29:26 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:03.016 04:29:26 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:03.016 04:29:26 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:03.016 04:29:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.016 04:29:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.016 04:29:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.016 ************************************ 00:09:03.016 START TEST nvme_fdp 00:09:03.016 ************************************ 00:09:03.016 04:29:26 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:09:03.278 * Looking for test storage... 
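(Note: the ctrl_has_scc trace above selected nvme1 at 0000:00:10.0 for the simple-copy run because its ONCS value 0x15d has bit 8 set, the bit that the (( oncs & 1 << 8 )) test checks for Copy-command support. A minimal stand-alone version of that check, assuming the same ONCS value reported by these QEMU controllers:)

    # Minimal sketch: bit 8 of ONCS (Optional NVM Command Support) is the Copy bit.
    oncs=0x15d                        # value reported by the QEMU controllers above
    if (( oncs & 1 << 8 )); then      # 0x15d & 0x100 == 0x100, so the bit is set
        echo "controller advertises the Copy (simple copy) command"
    fi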
00:09:03.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.278 --rc genhtml_branch_coverage=1 00:09:03.278 --rc genhtml_function_coverage=1 00:09:03.278 --rc genhtml_legend=1 00:09:03.278 --rc geninfo_all_blocks=1 00:09:03.278 --rc geninfo_unexecuted_blocks=1 00:09:03.278 00:09:03.278 ' 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.278 --rc genhtml_branch_coverage=1 00:09:03.278 --rc genhtml_function_coverage=1 00:09:03.278 --rc genhtml_legend=1 00:09:03.278 --rc geninfo_all_blocks=1 00:09:03.278 --rc geninfo_unexecuted_blocks=1 00:09:03.278 00:09:03.278 ' 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.278 --rc genhtml_branch_coverage=1 00:09:03.278 --rc genhtml_function_coverage=1 00:09:03.278 --rc genhtml_legend=1 00:09:03.278 --rc geninfo_all_blocks=1 00:09:03.278 --rc geninfo_unexecuted_blocks=1 00:09:03.278 00:09:03.278 ' 00:09:03.278 04:29:26 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.278 --rc genhtml_branch_coverage=1 00:09:03.278 --rc genhtml_function_coverage=1 00:09:03.278 --rc genhtml_legend=1 00:09:03.278 --rc geninfo_all_blocks=1 00:09:03.278 --rc geninfo_unexecuted_blocks=1 00:09:03.278 00:09:03.278 ' 00:09:03.278 04:29:26 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.278 04:29:26 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.278 04:29:26 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.278 04:29:26 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.278 04:29:26 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.278 04:29:26 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:03.278 04:29:26 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
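(Note: the lt 1.15 2 / cmp_versions trace above decides which lcov coverage flags get exported by splitting the two version strings on '.', '-' and ':' and comparing them field by field. Below is a rough, hypothetical version_lt sketch of that idea, not the scripts/common.sh implementation itself:)

    # Hypothetical version_lt helper sketching the cmp_versions idea: succeed when $1 < $2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2: keep the legacy --rc lcov_*_coverage=1 flags"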
00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:03.278 04:29:26 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:03.278 04:29:26 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:03.278 04:29:26 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:03.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.801 Waiting for block devices as requested 00:09:03.801 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.801 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.801 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.060 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.360 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:09.360 04:29:32 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:09.360 04:29:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.360 04:29:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:09.360 04:29:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.360 04:29:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:09.360 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:09.361 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:09.361 04:29:32 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.361 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:09.362 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:09.362 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.362 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:09.363 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.363 
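The repeating trace pattern above is the inner parsing loop of nvme_get in nvme/functions.sh: nvme-cli output is read line by line, split on ':', and every non-empty value is stored in a global associative array keyed by the register name. A rough reconstruction from the @16-@23 trace markers follows; the exact trimming and quoting in the real script may differ, and the nvme-cli path is taken from the trace itself.

nvme_get() {
    local ref=$1 reg val                          # @17: ref is the array name (nvme0, nvme0n1, ...)
    shift                                         # @18: remaining args form the nvme-cli command
    local -gA "$ref=()"                           # @20: declare the global associative array
    while IFS=: read -r reg val; do               # @21: split each "reg : val" output line
        reg=${reg//[[:space:]]/}                  # field names are right-padded in nvme-cli output (assumed trim)
        [[ -n $val ]] || continue                 # @22: keep only lines that actually carry a value
        eval "${ref}[$reg]=\"${val# }\""          # @23: e.g. nvme0[vwc]="0x7"
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. id-ctrl /dev/nvme0
}

Once populated this way, later test steps can read fields directly, e.g. checking ${nvme0[vwc]} or ${nvme0n1[flbas]} against expected values.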
04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:09.363 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.363 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:09.364 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.364 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:09.365 04:29:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.365 04:29:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:09.365 04:29:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.365 04:29:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.365 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 
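The @47-@63 markers in the trace show the surrounding controller scan: each /sys/class/nvme/nvmeX device that passes pci_can_use() gets an id-ctrl dump, each of its namespaces gets an id-ns dump, and the results are registered in the ctrls/nvmes/bdfs/ordered_ctrls maps. A rough sketch of that loop as implied by the trace; the PCI-address derivation and the exact variable handling are assumptions, and the loop body runs inside a scan function where the four maps are declared globally.

for ctrl in /sys/class/nvme/nvme*; do                     # @47
    [[ -e $ctrl ]] || continue                            # @48
    pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}   # @49: yields e.g. 0000:00:10.0 (derivation assumed)
    pci_can_use "$pci" || continue                        # @50: scripts/common.sh allow/block-list filter
    ctrl_dev=${ctrl##*/}                                  # @51: nvme0, nvme1, ...
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"         # @52: fill nvme0[], nvme1[], ...
    local -n _ctrl_ns=${ctrl_dev}_ns                      # @53: per-controller namespace map
    for ns in "$ctrl/${ctrl##*/}n"*; do                   # @54
        [[ -e $ns ]] || continue                          # @55
        ns_dev=${ns##*/}                                  # @56: nvme0n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"           # @57: fill nvme0n1[], ...
        _ctrl_ns[${ns##*n}]=$ns_dev                       # @58: index by namespace number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                          # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                     # @61
    bdfs["$ctrl_dev"]=$pci                                # @62: e.g. bdfs[nvme0]=0000:00:11.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # @63
done

This matches what the log records for this run: nvme0 at 0000:00:11.0 (subnqn nqn.2019-08.org.qemu:12341) was registered first, and the scan then moved on to nvme1 at 0000:00:10.0 (subnqn nqn.2019-08.org.qemu:12340), whose id-ctrl dump continues below.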
04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.366 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 
04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:09.367 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.367 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:09.368 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.368 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:09.369 04:29:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:09.370 04:29:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.370 04:29:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:09.370 04:29:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.370 04:29:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:09.370 
04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:09.370 04:29:32 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.370 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.371 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.372 04:29:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:09.372 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:09.373 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.373 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:09.374 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:09.375 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.375 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
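By this point the trace has moved on to the third namespace (nvme2n3) of controller nvme2; the surrounding loop walks /sys/class/nvme/nvme*, filters controllers via pci_can_use, runs id-ctrl, then iterates each nvmeXnY namespace node under the controller's sysfs directory. A simplified, self-contained sketch of that enumeration (array and variable names here are illustrative; the real logic, including the PCI filtering, lives in SPDK's nvme/functions.sh):

declare -A ctrl_bdf=()
for ctrl in /sys/class/nvme/nvme*; do
    name=${ctrl##*/}                                     # e.g. nvme2
    # PCIe-attached controllers: the device symlink ends in the PCI BDF
    ctrl_bdf[$name]=$(basename "$(readlink -f "$ctrl/device")")
    for ns in "$ctrl/${name}n"*; do
        [[ -e $ns ]] || continue
        echo "namespace /dev/${ns##*/} on $name (${ctrl_bdf[$name]})"
    done
done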
00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:09.376 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:09.377 
04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:09.377 04:29:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.377 04:29:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:09.377 04:29:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.377 04:29:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:09.377 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.377 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 
04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:09.378 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 
04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.379 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:09.380 04:29:32 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:09.380 04:29:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:09.381 04:29:32 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:09.381 04:29:32 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:09.381 04:29:32 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:09.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.534 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.534 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.534 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.534 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.534 04:29:33 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:10.534 04:29:33 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:10.534 04:29:33 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.534 04:29:33 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:10.534 ************************************ 00:09:10.534 START TEST nvme_flexible_data_placement 00:09:10.534 ************************************ 00:09:10.534 04:29:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:10.796 Initializing NVMe Controllers 00:09:10.796 Attaching to 0000:00:13.0 00:09:10.796 Controller supports FDP Attached to 0000:00:13.0 00:09:10.796 Namespace ID: 1 Endurance Group ID: 1 00:09:10.796 Initialization complete. 00:09:10.796 00:09:10.796 ================================== 00:09:10.796 == FDP tests for Namespace: #01 == 00:09:10.796 ================================== 00:09:10.796 00:09:10.796 Get Feature: FDP: 00:09:10.796 ================= 00:09:10.796 Enabled: Yes 00:09:10.796 FDP configuration Index: 0 00:09:10.796 00:09:10.796 FDP configurations log page 00:09:10.796 =========================== 00:09:10.796 Number of FDP configurations: 1 00:09:10.796 Version: 0 00:09:10.796 Size: 112 00:09:10.796 FDP Configuration Descriptor: 0 00:09:10.796 Descriptor Size: 96 00:09:10.796 Reclaim Group Identifier format: 2 00:09:10.796 FDP Volatile Write Cache: Not Present 00:09:10.796 FDP Configuration: Valid 00:09:10.796 Vendor Specific Size: 0 00:09:10.796 Number of Reclaim Groups: 2 00:09:10.796 Number of Reclaim Unit Handles: 8 00:09:10.796 Max Placement Identifiers: 128 00:09:10.796 Number of Namespaces Supported: 256 00:09:10.796 Reclaim Unit Nominal Size: 6000000 bytes 00:09:10.796 Estimated Reclaim Unit Time Limit: Not Reported 00:09:10.796 RUH Desc #000: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #001: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #002: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #003: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #004: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #005: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #006: RUH Type: Initially Isolated 00:09:10.796 RUH Desc #007: RUH Type: Initially Isolated 00:09:10.796 00:09:10.796 FDP reclaim unit handle usage log page 00:09:10.796 ====================================== 00:09:10.796 Number of Reclaim Unit Handles: 8 00:09:10.796 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:10.796 RUH Usage Desc #001: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #002: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #003: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #004: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #005: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #006: RUH Attributes: Unused 00:09:10.796 RUH Usage Desc #007: RUH Attributes: Unused 00:09:10.796 00:09:10.796 FDP statistics log page 00:09:10.796 ======================= 00:09:10.796 Host bytes with metadata written: 960307200 00:09:10.796 Media bytes with metadata written: 960544768 00:09:10.796 Media bytes erased: 0 00:09:10.796 00:09:10.796 FDP Reclaim unit handle status 00:09:10.796 ============================== 00:09:10.796 Number of RUHS descriptors: 2 00:09:10.796 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002c2e 00:09:10.796 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:10.796 00:09:10.796 FDP write on placement id: 0 success 00:09:10.796 00:09:10.796 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:09:10.796 00:09:10.796 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:10.796 00:09:10.796 Get Feature: FDP Events for Placement handle: #0 00:09:10.796 ======================== 00:09:10.796 Number of FDP Events: 6 00:09:10.796 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:10.796 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:10.796 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:10.796 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:10.796 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:10.796 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:10.796 00:09:10.796 FDP events log page 00:09:10.796 =================== 00:09:10.796 Number of FDP events: 1 00:09:10.796 FDP Event #0: 00:09:10.796 Event Type: RU Not Written to Capacity 00:09:10.796 Placement Identifier: Valid 00:09:10.796 NSID: Valid 00:09:10.796 Location: Valid 00:09:10.796 Placement Identifier: 0 00:09:10.796 Event Timestamp: f 00:09:10.796 Namespace Identifier: 1 00:09:10.796 Reclaim Group Identifier: 0 00:09:10.796 Reclaim Unit Handle Identifier: 0 00:09:10.796 00:09:10.796 FDP test passed 00:09:10.796 ************************************ 00:09:10.796 END TEST nvme_flexible_data_placement 00:09:10.796 ************************************ 00:09:10.796 00:09:10.796 real 0m0.245s 00:09:10.796 user 0m0.074s 00:09:10.796 sys 0m0.070s 00:09:10.796 04:29:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.796 04:29:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:10.796 ************************************ 00:09:10.796 END TEST nvme_fdp 00:09:10.796 ************************************ 00:09:10.796 00:09:10.796 real 0m7.697s 00:09:10.796 user 0m1.042s 00:09:10.796 sys 0m1.352s 00:09:10.796 04:29:33 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.796 04:29:33 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:10.796 04:29:33 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:10.796 04:29:33 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:10.796 04:29:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:10.796 04:29:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.796 04:29:33 -- common/autotest_common.sh@10 -- # set +x 00:09:10.796 ************************************ 00:09:10.796 START TEST nvme_rpc 00:09:10.796 ************************************ 00:09:10.796 04:29:33 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:11.057 * Looking for test storage... 
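For reference, the controller selection earlier in this run (nvme3 on 0000:00:13.0) comes down to one test: CTRATT bit 19 advertises Flexible Data Placement support, so the 0x88010 reported by nvme3 qualifies while the 0x8000 reported by the other controllers does not. A standalone sketch of that check for a kernel-attached controller (assumes nvme-cli is installed and /dev/nvme3 exists; the test itself parses its own 'nvme id-ctrl' dump through nvme/functions.sh instead):
  # Read CTRATT from the Identify Controller data and test the FDP bit (bit 19 = 0x80000).
  ctratt=$(nvme id-ctrl /dev/nvme3 | awk '/^ctratt/ {print $3}')
  if (( ctratt & (1 << 19) )); then echo "nvme3 supports FDP"; fi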
00:09:11.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.057 04:29:33 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.057 --rc genhtml_branch_coverage=1 00:09:11.057 --rc genhtml_function_coverage=1 00:09:11.057 --rc genhtml_legend=1 00:09:11.057 --rc geninfo_all_blocks=1 00:09:11.057 --rc geninfo_unexecuted_blocks=1 00:09:11.057 00:09:11.057 ' 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.057 --rc genhtml_branch_coverage=1 00:09:11.057 --rc genhtml_function_coverage=1 00:09:11.057 --rc genhtml_legend=1 00:09:11.057 --rc geninfo_all_blocks=1 00:09:11.057 --rc geninfo_unexecuted_blocks=1 00:09:11.057 00:09:11.057 ' 00:09:11.057 04:29:33 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:11.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.057 --rc genhtml_branch_coverage=1 00:09:11.057 --rc genhtml_function_coverage=1 00:09:11.057 --rc genhtml_legend=1 00:09:11.057 --rc geninfo_all_blocks=1 00:09:11.057 --rc geninfo_unexecuted_blocks=1 00:09:11.057 00:09:11.057 ' 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.058 --rc genhtml_branch_coverage=1 00:09:11.058 --rc genhtml_function_coverage=1 00:09:11.058 --rc genhtml_legend=1 00:09:11.058 --rc geninfo_all_blocks=1 00:09:11.058 --rc geninfo_unexecuted_blocks=1 00:09:11.058 00:09:11.058 ' 00:09:11.058 04:29:33 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.058 04:29:33 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.058 04:29:33 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:11.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.058 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:11.058 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65711 00:09:11.058 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:11.058 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65711 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65711 ']' 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.058 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:11.058 04:29:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.058 [2024-11-03 04:29:34.117994] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:09:11.058 [2024-11-03 04:29:34.118266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65711 ] 00:09:11.318 [2024-11-03 04:29:34.276279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.318 [2024-11-03 04:29:34.377878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.318 [2024-11-03 04:29:34.377952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.889 04:29:34 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.889 04:29:34 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:11.889 04:29:34 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:12.149 Nvme0n1 00:09:12.410 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:12.410 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:12.410 request: 00:09:12.410 { 00:09:12.410 "bdev_name": "Nvme0n1", 00:09:12.410 "filename": "non_existing_file", 00:09:12.410 "method": "bdev_nvme_apply_firmware", 00:09:12.410 "req_id": 1 00:09:12.410 } 00:09:12.410 Got JSON-RPC error response 00:09:12.410 response: 00:09:12.410 { 00:09:12.410 "code": -32603, 00:09:12.410 "message": "open file failed." 00:09:12.410 } 00:09:12.410 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:12.410 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:12.410 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:12.671 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:12.671 04:29:35 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65711 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65711 ']' 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65711 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65711 00:09:12.671 killing process with pid 65711 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65711' 00:09:12.671 04:29:35 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65711 00:09:12.672 04:29:35 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65711 00:09:14.054 ************************************ 00:09:14.054 END TEST nvme_rpc 00:09:14.054 ************************************ 00:09:14.054 00:09:14.054 real 0m3.253s 00:09:14.054 user 0m6.224s 00:09:14.054 sys 0m0.483s 00:09:14.054 04:29:37 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.054 04:29:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.314 04:29:37 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:14.314 04:29:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:09:14.314 04:29:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.314 04:29:37 -- common/autotest_common.sh@10 -- # set +x 00:09:14.314 ************************************ 00:09:14.314 START TEST nvme_rpc_timeouts 00:09:14.314 ************************************ 00:09:14.314 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:14.315 * Looking for test storage... 00:09:14.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.315 04:29:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.315 --rc genhtml_branch_coverage=1 00:09:14.315 --rc genhtml_function_coverage=1 00:09:14.315 --rc genhtml_legend=1 00:09:14.315 --rc geninfo_all_blocks=1 00:09:14.315 --rc geninfo_unexecuted_blocks=1 00:09:14.315 00:09:14.315 ' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.315 --rc genhtml_branch_coverage=1 00:09:14.315 --rc genhtml_function_coverage=1 00:09:14.315 --rc genhtml_legend=1 00:09:14.315 --rc geninfo_all_blocks=1 00:09:14.315 --rc geninfo_unexecuted_blocks=1 00:09:14.315 00:09:14.315 ' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.315 --rc genhtml_branch_coverage=1 00:09:14.315 --rc genhtml_function_coverage=1 00:09:14.315 --rc genhtml_legend=1 00:09:14.315 --rc geninfo_all_blocks=1 00:09:14.315 --rc geninfo_unexecuted_blocks=1 00:09:14.315 00:09:14.315 ' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.315 --rc genhtml_branch_coverage=1 00:09:14.315 --rc genhtml_function_coverage=1 00:09:14.315 --rc genhtml_legend=1 00:09:14.315 --rc geninfo_all_blocks=1 00:09:14.315 --rc geninfo_unexecuted_blocks=1 00:09:14.315 00:09:14.315 ' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65776 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65776 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65808 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
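The waitforlisten call that follows simply blocks until the spdk_tgt started above (pid 65808) answers on its RPC socket before any timeout settings are touched. A rough equivalent outside the test framework is to poll a harmless RPC until it succeeds (sketch only; uses the repo's rpc.py and its default /var/tmp/spdk.sock):
  # Poll rpc_get_methods with a 1-second timeout until the target is ready.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done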
00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65808 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65808 ']' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.315 04:29:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:14.315 [2024-11-03 04:29:37.369425] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:09:14.315 [2024-11-03 04:29:37.369725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65808 ] 00:09:14.576 [2024-11-03 04:29:37.531668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.576 [2024-11-03 04:29:37.631718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.576 [2024-11-03 04:29:37.631795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.145 04:29:38 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.145 04:29:38 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:15.145 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:15.145 Checking default timeout settings: 00:09:15.145 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:15.712 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:15.712 Making settings changes with rpc: 00:09:15.712 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:15.712 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:15.712 Check default vs. 
modified settings: 00:09:15.712 04:29:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:15.974 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:15.974 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:15.974 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65776 00:09:15.974 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:15.974 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:16.286 Setting action_on_timeout is changed as expected. 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.286 Setting timeout_us is changed as expected. 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
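Each of the three settings is verified the same way, as the timeout_admin_us block below repeats: grep the key out of the config saved before and after the RPC, keep only the alphanumeric part of the value, and require that the two differ. Condensed into one snippet (the tmpfiles are the ones created by this run; shown for timeout_us, where the default 0 becomes 12000000):
  before=$(grep timeout_us /tmp/settings_default_65776  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep timeout_us /tmp/settings_modified_65776  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [ "$before" != "$after" ] && echo "Setting timeout_us is changed as expected."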
00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.286 Setting timeout_admin_us is changed as expected. 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65776 /tmp/settings_modified_65776 00:09:16.286 04:29:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65808 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65808 ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65808 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65808 00:09:16.286 killing process with pid 65808 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65808' 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65808 00:09:16.286 04:29:39 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65808 00:09:17.660 RPC TIMEOUT SETTING TEST PASSED. 00:09:17.660 04:29:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
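The values pushed through bdev_nvme_set_options earlier are microseconds, so the target is left with a 12 s I/O timeout, a 24 s admin-command timeout, and abort as the action taken when a command times out. One way to read the applied options back out of a saved config (a jq sketch, assuming the usual save_config layout with the bdev subsystem holding the bdev_nvme_set_options entry):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
    | jq '.subsystems[] | select(.subsystem == "bdev")
          | .config[] | select(.method == "bdev_nvme_set_options")
          | .params | {action_on_timeout, timeout_us, timeout_admin_us}'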
00:09:17.660 00:09:17.660 real 0m3.380s 00:09:17.660 user 0m6.586s 00:09:17.660 sys 0m0.474s 00:09:17.660 04:29:40 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.660 04:29:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:17.660 ************************************ 00:09:17.660 END TEST nvme_rpc_timeouts 00:09:17.660 ************************************ 00:09:17.660 04:29:40 -- spdk/autotest.sh@239 -- # uname -s 00:09:17.660 04:29:40 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:17.660 04:29:40 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:17.660 04:29:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:17.660 04:29:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.660 04:29:40 -- common/autotest_common.sh@10 -- # set +x 00:09:17.660 ************************************ 00:09:17.660 START TEST sw_hotplug 00:09:17.660 ************************************ 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:17.660 * Looking for test storage... 00:09:17.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.660 04:29:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:17.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.660 --rc genhtml_branch_coverage=1 00:09:17.660 --rc genhtml_function_coverage=1 00:09:17.660 --rc genhtml_legend=1 00:09:17.660 --rc geninfo_all_blocks=1 00:09:17.660 --rc geninfo_unexecuted_blocks=1 00:09:17.660 00:09:17.660 ' 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:17.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.660 --rc genhtml_branch_coverage=1 00:09:17.660 --rc genhtml_function_coverage=1 00:09:17.660 --rc genhtml_legend=1 00:09:17.660 --rc geninfo_all_blocks=1 00:09:17.660 --rc geninfo_unexecuted_blocks=1 00:09:17.660 00:09:17.660 ' 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:17.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.660 --rc genhtml_branch_coverage=1 00:09:17.660 --rc genhtml_function_coverage=1 00:09:17.660 --rc genhtml_legend=1 00:09:17.660 --rc geninfo_all_blocks=1 00:09:17.660 --rc geninfo_unexecuted_blocks=1 00:09:17.660 00:09:17.660 ' 00:09:17.660 04:29:40 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:17.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.660 --rc genhtml_branch_coverage=1 00:09:17.660 --rc genhtml_function_coverage=1 00:09:17.660 --rc genhtml_legend=1 00:09:17.660 --rc geninfo_all_blocks=1 00:09:17.660 --rc geninfo_unexecuted_blocks=1 00:09:17.660 00:09:17.660 ' 00:09:17.660 04:29:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:17.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.177 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.177 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.177 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.177 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.177 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
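The scripts/common.sh trace above is a field-by-field version comparison: it decides whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A simplified, self-contained illustration of that check (not the verbatim common.sh helpers, which also normalize non-numeric version fields):

    # Minimal stand-in for the traced lt/cmp_versions helpers.
    version_lt() {                      # succeeds if $1 sorts before $2
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    # In the trace, lcov reports 1.15, which is < 2, so the legacy
    # '--rc lcov_*_coverage=1' options are exported.
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov options needed"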
00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.178 04:29:41 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:18.178 04:29:41 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:18.178 04:29:41 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.694 Waiting for block devices as requested 00:09:18.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.694 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.952 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.231 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.231 04:29:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:24.231 04:29:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.231 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:24.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.231 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:24.492 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:24.753 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.753 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66668 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:25.014 04:29:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:25.014 04:29:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:25.275 Initializing NVMe Controllers 00:09:25.275 Attaching to 0000:00:10.0 00:09:25.275 Attaching to 0000:00:11.0 00:09:25.275 Attached to 0000:00:11.0 00:09:25.275 Attached to 0000:00:10.0 00:09:25.275 Initialization complete. Starting I/O... 
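Above, run_hotplug launches the hotplug example app to drive I/O while remove_attach_helper cycles the controllers (3 hotplug events, 6-second settle time, use_bdev=false). bash xtrace does not print redirection targets, so the bare "echo 1", "echo uio_pci_generic" and "echo 0000:00:10.0" lines in the cycles that follow do not show which files they write to. The sketch below is one plausible reconstruction using the standard Linux PCI sysfs nodes (remove, rescan, driver_override and drivers_probe are real kernel interfaces); the exact targets used by test/nvme/sw_hotplug.sh are an assumption here, not read from this log.

    # Hedged sketch: how one surprise-removal/re-attach cycle could be driven
    # through PCI sysfs. The traced script only shows the echoed values.
    cycle_device() {
        local bdf=$1 driver=uio_pci_generic
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"             # surprise removal
        sleep 6                                                 # hotplug_wait from the trace
        echo 1 > /sys/bus/pci/rescan                            # bring the device back
        echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf"    > /sys/bus/pci/drivers_probe             # rebind to the userspace driver
        echo ''        > "/sys/bus/pci/devices/$bdf/driver_override"
    }
    cycle_device 0000:00:10.0
    cycle_device 0000:00:11.0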
00:09:25.275 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:25.275 QEMU NVMe Ctrl (12340 ): 1 I/Os completed (+1) 00:09:25.275 00:09:26.217 QEMU NVMe Ctrl (12341 ): 2656 I/Os completed (+2656) 00:09:26.217 QEMU NVMe Ctrl (12340 ): 2657 I/Os completed (+2656) 00:09:26.217 00:09:27.162 QEMU NVMe Ctrl (12341 ): 6062 I/Os completed (+3406) 00:09:27.162 QEMU NVMe Ctrl (12340 ): 6065 I/Os completed (+3408) 00:09:27.162 00:09:28.098 QEMU NVMe Ctrl (12341 ): 9599 I/Os completed (+3537) 00:09:28.098 QEMU NVMe Ctrl (12340 ): 9659 I/Os completed (+3594) 00:09:28.098 00:09:29.482 QEMU NVMe Ctrl (12341 ): 12844 I/Os completed (+3245) 00:09:29.482 QEMU NVMe Ctrl (12340 ): 12906 I/Os completed (+3247) 00:09:29.482 00:09:30.422 QEMU NVMe Ctrl (12341 ): 16100 I/Os completed (+3256) 00:09:30.422 QEMU NVMe Ctrl (12340 ): 16158 I/Os completed (+3252) 00:09:30.422 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.988 [2024-11-03 04:29:53.958726] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:30.988 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:30.988 [2024-11-03 04:29:53.959812] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.959858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.959873] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.959888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:30.988 [2024-11-03 04:29:53.961302] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.961343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.961355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.961366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.988 [2024-11-03 04:29:53.981547] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:30.988 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:30.988 [2024-11-03 04:29:53.982422] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.982458] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.982477] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.982492] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:30.988 [2024-11-03 04:29:53.983858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.983890] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.983902] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 [2024-11-03 04:29:53.983912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:30.988 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:30.988 04:29:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:30.988 EAL: Scan for (pci) bus failed. 00:09:30.988 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:30.988 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:30.988 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:31.246 Attaching to 0000:00:10.0 00:09:31.246 Attached to 0000:00:10.0 00:09:31.246 QEMU NVMe Ctrl (12340 ): 60 I/Os completed (+60) 00:09:31.246 00:09:31.246 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:31.247 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.247 04:29:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:31.247 Attaching to 0000:00:11.0 00:09:31.247 Attached to 0000:00:11.0 00:09:32.187 QEMU NVMe Ctrl (12340 ): 3587 I/Os completed (+3527) 00:09:32.187 QEMU NVMe Ctrl (12341 ): 3288 I/Os completed (+3288) 00:09:32.187 00:09:33.131 QEMU NVMe Ctrl (12340 ): 6839 I/Os completed (+3252) 00:09:33.131 QEMU NVMe Ctrl (12341 ): 6540 I/Os completed (+3252) 00:09:33.131 00:09:34.515 QEMU NVMe Ctrl (12340 ): 10099 I/Os completed (+3260) 00:09:34.515 QEMU NVMe Ctrl (12341 ): 9801 I/Os completed (+3261) 00:09:34.515 00:09:35.081 QEMU NVMe Ctrl (12340 ): 13653 I/Os completed (+3554) 00:09:35.081 QEMU NVMe Ctrl (12341 ): 13360 I/Os completed (+3559) 00:09:35.081 00:09:36.456 QEMU NVMe Ctrl (12340 ): 17355 I/Os completed (+3702) 00:09:36.456 QEMU NVMe Ctrl (12341 ): 17058 I/Os completed (+3698) 00:09:36.456 00:09:37.392 QEMU NVMe Ctrl (12340 ): 21033 I/Os completed (+3678) 00:09:37.392 QEMU NVMe Ctrl (12341 ): 20752 I/Os completed (+3694) 00:09:37.392 00:09:38.332 QEMU NVMe Ctrl (12340 ): 24495 I/Os completed (+3462) 
00:09:38.332 QEMU NVMe Ctrl (12341 ): 24193 I/Os completed (+3441) 00:09:38.332 00:09:39.271 QEMU NVMe Ctrl (12340 ): 27863 I/Os completed (+3368) 00:09:39.271 QEMU NVMe Ctrl (12341 ): 27568 I/Os completed (+3375) 00:09:39.271 00:09:40.205 QEMU NVMe Ctrl (12340 ): 31546 I/Os completed (+3683) 00:09:40.205 QEMU NVMe Ctrl (12341 ): 31246 I/Os completed (+3678) 00:09:40.205 00:09:41.166 QEMU NVMe Ctrl (12340 ): 35157 I/Os completed (+3611) 00:09:41.166 QEMU NVMe Ctrl (12341 ): 34904 I/Os completed (+3658) 00:09:41.166 00:09:42.111 QEMU NVMe Ctrl (12340 ): 38081 I/Os completed (+2924) 00:09:42.111 QEMU NVMe Ctrl (12341 ): 37828 I/Os completed (+2924) 00:09:42.111 00:09:43.486 QEMU NVMe Ctrl (12340 ): 41710 I/Os completed (+3629) 00:09:43.486 QEMU NVMe Ctrl (12341 ): 41461 I/Os completed (+3633) 00:09:43.486 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.486 [2024-11-03 04:30:06.224283] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:43.486 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:43.486 [2024-11-03 04:30:06.225236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.225275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.225294] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.225307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:43.486 [2024-11-03 04:30:06.226868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.226907] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.226919] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.226932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.486 [2024-11-03 04:30:06.251046] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:43.486 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:43.486 [2024-11-03 04:30:06.251910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.251941] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.251959] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.251972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:43.486 [2024-11-03 04:30:06.253299] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.253326] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.253338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 [2024-11-03 04:30:06.253350] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.486 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:43.486 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:43.486 EAL: Scan for (pci) bus failed. 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:43.487 Attaching to 0000:00:10.0 00:09:43.487 Attached to 0000:00:10.0 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.487 04:30:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:43.487 Attaching to 0000:00:11.0 00:09:43.487 Attached to 0000:00:11.0 00:09:44.420 QEMU NVMe Ctrl (12340 ): 2712 I/Os completed (+2712) 00:09:44.420 QEMU NVMe Ctrl (12341 ): 2383 I/Os completed (+2383) 00:09:44.420 00:09:45.354 QEMU NVMe Ctrl (12340 ): 6405 I/Os completed (+3693) 00:09:45.354 QEMU NVMe Ctrl (12341 ): 6096 I/Os completed (+3713) 00:09:45.354 00:09:46.297 QEMU NVMe Ctrl (12340 ): 9710 I/Os completed (+3305) 00:09:46.297 QEMU NVMe Ctrl (12341 ): 9440 I/Os completed (+3344) 00:09:46.297 00:09:47.234 QEMU NVMe Ctrl (12340 ): 13106 I/Os completed (+3396) 00:09:47.234 QEMU NVMe Ctrl (12341 ): 12830 I/Os completed (+3390) 00:09:47.234 00:09:48.166 QEMU NVMe Ctrl (12340 ): 16836 I/Os completed (+3730) 00:09:48.166 QEMU NVMe Ctrl (12341 ): 16571 I/Os completed (+3741) 00:09:48.166 00:09:49.105 QEMU NVMe Ctrl (12340 ): 20339 I/Os completed (+3503) 00:09:49.105 QEMU NVMe Ctrl (12341 ): 20072 I/Os completed (+3501) 00:09:49.105 00:09:50.489 QEMU NVMe Ctrl (12340 ): 23622 I/Os completed (+3283) 00:09:50.489 QEMU NVMe Ctrl (12341 ): 23348 I/Os completed (+3276) 00:09:50.489 
00:09:51.441 QEMU NVMe Ctrl (12340 ): 27354 I/Os completed (+3732) 00:09:51.441 QEMU NVMe Ctrl (12341 ): 27077 I/Os completed (+3729) 00:09:51.441 00:09:52.387 QEMU NVMe Ctrl (12340 ): 31065 I/Os completed (+3711) 00:09:52.387 QEMU NVMe Ctrl (12341 ): 30788 I/Os completed (+3711) 00:09:52.387 00:09:53.319 QEMU NVMe Ctrl (12340 ): 34739 I/Os completed (+3674) 00:09:53.319 QEMU NVMe Ctrl (12341 ): 34462 I/Os completed (+3674) 00:09:53.319 00:09:54.256 QEMU NVMe Ctrl (12340 ): 38104 I/Os completed (+3365) 00:09:54.256 QEMU NVMe Ctrl (12341 ): 37915 I/Os completed (+3453) 00:09:54.256 00:09:55.199 QEMU NVMe Ctrl (12340 ): 40825 I/Os completed (+2721) 00:09:55.199 QEMU NVMe Ctrl (12341 ): 40612 I/Os completed (+2697) 00:09:55.199 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.461 [2024-11-03 04:30:18.512342] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:55.461 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:55.461 [2024-11-03 04:30:18.513756] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.513820] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.513840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.513860] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:55.461 [2024-11-03 04:30:18.516806] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.516894] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.516911] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.516930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.461 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.461 [2024-11-03 04:30:18.538648] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:55.461 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:55.461 [2024-11-03 04:30:18.539924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.539983] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.540003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.540019] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:55.461 [2024-11-03 04:30:18.541919] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.541973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.541996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.461 [2024-11-03 04:30:18.542014] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:55.724 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:55.724 EAL: Scan for (pci) bus failed. 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.724 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:55.724 Attaching to 0000:00:10.0 00:09:55.724 Attached to 0000:00:10.0 00:09:55.985 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:55.985 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.985 04:30:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:55.985 Attaching to 0000:00:11.0 00:09:55.985 Attached to 0000:00:11.0 00:09:55.985 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:55.985 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:55.985 [2024-11-03 04:30:18.840513] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:08.225 04:30:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:08.225 04:30:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.225 04:30:30 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.88 00:10:08.225 04:30:30 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.88 00:10:08.225 04:30:30 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:08.225 04:30:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.88 00:10:08.225 04:30:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.88 2 00:10:08.225 remove_attach_helper took 42.88s to complete (handling 2 nvme drive(s)) 04:30:30 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66668 00:10:14.846 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66668) - No such process 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66668 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67216 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:14.846 04:30:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67216 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67216 ']' 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.846 04:30:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 [2024-11-03 04:30:36.933506] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
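With the standalone app gone (the kill -0 66668 probe above reports "No such process"), tgt_run_hotplug repeats the exercise against a long-lived spdk_tgt: hotplug detection is enabled over RPC (bdev_nvme_set_hotplug -e, traced below) and, with use_bdev=true, a removal is considered complete only once the target's bdev list no longer reports the controllers. The polling idiom that recurs through the rest of the trace paraphrases to the following sketch (not the verbatim sw_hotplug.sh):

    # Paraphrase of the bdev_bdfs / wait loop seen in the trace below.
    bdev_bdfs() {
        # rpc_cmd wraps scripts/rpc.py in the SPDK test environment; ask the
        # running spdk_tgt which NVMe bdevs are still attached.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done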
00:10:14.846 [2024-11-03 04:30:36.933674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67216 ] 00:10:14.846 [2024-11-03 04:30:37.099447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.846 [2024-11-03 04:30:37.221606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:14.846 04:30:37 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:14.846 04:30:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.415 04:30:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.415 04:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.415 04:30:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:21.415 04:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:21.415 [2024-11-03 04:30:44.010083] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:21.415 [2024-11-03 04:30:44.011285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.011321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.011334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.011352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.011359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.011368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.011375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.011382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.011389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.011400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.011407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.011414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.410069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:21.415 [2024-11-03 04:30:44.411274] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.411305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.411316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.411329] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.411338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.411345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.411353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.411360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.411367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 [2024-11-03 04:30:44.411374] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.415 [2024-11-03 04:30:44.411382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.415 [2024-11-03 04:30:44.411389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.415 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:21.415 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.673 04:30:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.673 04:30:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.673 04:30:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.673 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:21.930 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:21.930 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.930 04:30:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.125 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.125 04:30:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.126 04:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.126 04:30:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.126 04:30:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.126 04:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.126 04:30:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:34.126 04:30:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:34.126 [2024-11-03 04:30:56.910267] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:34.126 [2024-11-03 04:30:56.911487] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.126 [2024-11-03 04:30:56.911522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.126 [2024-11-03 04:30:56.911532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.126 [2024-11-03 04:30:56.911550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.126 [2024-11-03 04:30:56.911567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.126 [2024-11-03 04:30:56.911576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.126 [2024-11-03 04:30:56.911584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.126 [2024-11-03 04:30:56.911592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.126 [2024-11-03 04:30:56.911598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.126 [2024-11-03 04:30:56.911606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.126 [2024-11-03 04:30:56.911613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.126 [2024-11-03 04:30:56.911620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.384 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.385 04:30:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.385 04:30:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.385 [2024-11-03 04:30:57.410260] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:34.385 [2024-11-03 04:30:57.411419] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.385 [2024-11-03 04:30:57.411449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.385 [2024-11-03 04:30:57.411462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.385 [2024-11-03 04:30:57.411476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.385 [2024-11-03 04:30:57.411485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.385 [2024-11-03 04:30:57.411491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.385 [2024-11-03 04:30:57.411500] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.385 [2024-11-03 04:30:57.411506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.385 [2024-11-03 04:30:57.411514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.385 [2024-11-03 04:30:57.411521] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.385 [2024-11-03 04:30:57.411528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.385 [2024-11-03 04:30:57.411535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.385 04:30:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.385 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:34.385 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.952 04:30:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.952 04:30:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.952 04:30:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:34.952 04:30:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.952 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.952 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.952 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.209 04:30:58 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.209 04:30:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 04:31:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.409 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.409 [2024-11-03 04:31:10.310462] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:47.409 [2024-11-03 04:31:10.312232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.409 [2024-11-03 04:31:10.312266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.409 [2024-11-03 04:31:10.312277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.409 [2024-11-03 04:31:10.312293] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.409 [2024-11-03 04:31:10.312300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.409 [2024-11-03 04:31:10.312310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.409 [2024-11-03 04:31:10.312317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.409 [2024-11-03 04:31:10.312324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.409 [2024-11-03 04:31:10.312331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.409 [2024-11-03 04:31:10.312339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.409 [2024-11-03 04:31:10.312345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.409 [2024-11-03 04:31:10.312353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.976 04:31:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.976 04:31:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.976 [2024-11-03 04:31:10.810469] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:47.976 [2024-11-03 04:31:10.811794] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.976 [2024-11-03 04:31:10.811825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.976 [2024-11-03 04:31:10.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.976 [2024-11-03 04:31:10.811850] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.976 [2024-11-03 04:31:10.811859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.976 [2024-11-03 04:31:10.811866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.976 [2024-11-03 04:31:10.811875] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.976 [2024-11-03 04:31:10.811881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.976 [2024-11-03 04:31:10.811891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.976 [2024-11-03 04:31:10.811897] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.976 [2024-11-03 04:31:10.811906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.976 [2024-11-03 04:31:10.811912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.976 04:31:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.976 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.977 04:31:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:47.977 04:31:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.235 04:31:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.235 04:31:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:11:00.434 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:00.434 04:31:23 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:00.434 04:31:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:07.006 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.007 [2024-11-03 04:31:29.228370] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:07.007 [2024-11-03 04:31:29.229490] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.229525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.229535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.229552] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.229568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.229576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.229583] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.229591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.229597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.229606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.229612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.229622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.628365] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
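The echo 1 traced at sw_hotplug.sh@40 above is the software-removal step that provokes the nvme_ctrlr_fail and aborting-outstanding-command messages, and the @50-@51 loop then waits for both bdevs to disappear. The trace never shows the redirection targets, so the sysfs paths in this sketch are assumptions based on conventional Linux PCI hot-unplug, not values taken from the log:

# Hypothetical reconstruction of the removal phase; the targets of the echoes
# are NOT in the trace. /sys/bus/pci/devices/$dev/remove is the conventional
# node for a software-initiated PCI removal (requires root).
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"      # sw_hotplug.sh@40
done
# Poll until the SPDK bdevs backed by those controllers are gone.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do                      # sw_hotplug.sh@50
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # sw_hotplug.sh@51
    sleep 0.5
    bdfs=($(bdev_bdfs))
done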
00:11:07.007 [2024-11-03 04:31:29.629249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.629279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.629291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.629302] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.629310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.629317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.629327] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.629334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.629342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 [2024-11-03 04:31:29.629349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.007 [2024-11-03 04:31:29.629357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.007 [2024-11-03 04:31:29.629363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.007 04:31:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.007 04:31:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.208 04:31:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.208 04:31:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.208 04:31:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.208 04:31:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.208 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.208 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.208 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.208 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.208 [2024-11-03 04:31:42.028575] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:19.208 [2024-11-03 04:31:42.029931] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.208 [2024-11-03 04:31:42.030032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.208 [2024-11-03 04:31:42.030115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.208 [2024-11-03 04:31:42.030164] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.208 [2024-11-03 04:31:42.030184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.208 [2024-11-03 04:31:42.030210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.208 [2024-11-03 04:31:42.030293] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.208 [2024-11-03 04:31:42.030312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.208 [2024-11-03 04:31:42.030335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.208 [2024-11-03 04:31:42.030390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.209 [2024-11-03 04:31:42.030408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.209 [2024-11-03 04:31:42.030432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.209 04:31:42 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.209 04:31:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.209 04:31:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.209 04:31:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:19.209 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.467 [2024-11-03 04:31:42.428570] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:19.467 [2024-11-03 04:31:42.429724] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.467 [2024-11-03 04:31:42.429752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.467 [2024-11-03 04:31:42.429763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.467 [2024-11-03 04:31:42.429775] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.467 [2024-11-03 04:31:42.429784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.467 [2024-11-03 04:31:42.429791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.467 [2024-11-03 04:31:42.429799] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.467 [2024-11-03 04:31:42.429806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.467 [2024-11-03 04:31:42.429814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.467 [2024-11-03 04:31:42.429821] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.467 [2024-11-03 04:31:42.429829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.467 [2024-11-03 04:31:42.429835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:19.725 04:31:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.725 04:31:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.725 04:31:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.725 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.983 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.983 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.983 04:31:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.184 [2024-11-03 04:31:54.928786] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
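The @56-@66 trace just above is the re-attach half of the cycle: a rescan, then per-device driver_override and probe with uio_pci_generic, a 12-second settle, and finally the @68-@71 check that bdev_bdfs reports both BDFs again. Only the echoed values appear in the trace; the targets below are hypothetical, chosen to match typical sysfs rebinding, and the second per-device BDF echo (@61) is omitted because its target cannot be recovered from the log:

# Hypothetical targets for the @56-@62 echoes; only the echoed values are in the log.
echo 1 > /sys/bus/pci/rescan                                             # sw_hotplug.sh@56
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # sw_hotplug.sh@59
    echo "$dev" > /sys/bus/pci/drivers_probe                             # sw_hotplug.sh@60
    echo ''     > "/sys/bus/pci/devices/$dev/driver_override"            # sw_hotplug.sh@62
done
sleep 12                                                                 # sw_hotplug.sh@66
# @68-@71 then re-runs bdev_bdfs and compares against the expected BDF list.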
00:11:32.184 [2024-11-03 04:31:54.929910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.184 [2024-11-03 04:31:54.930018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.184 [2024-11-03 04:31:54.930083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.184 [2024-11-03 04:31:54.930144] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.184 [2024-11-03 04:31:54.930164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.184 [2024-11-03 04:31:54.930216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.184 [2024-11-03 04:31:54.930242] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.184 [2024-11-03 04:31:54.930291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.184 [2024-11-03 04:31:54.930317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.184 [2024-11-03 04:31:54.930366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.184 [2024-11-03 04:31:54.930384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.184 [2024-11-03 04:31:54.930409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.184 04:31:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:32.184 04:31:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.443 [2024-11-03 04:31:55.428785] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:32.443 [2024-11-03 04:31:55.429801] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.443 [2024-11-03 04:31:55.429901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.443 [2024-11-03 04:31:55.429964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.443 [2024-11-03 04:31:55.430024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.443 [2024-11-03 04:31:55.430045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.443 [2024-11-03 04:31:55.430094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.443 [2024-11-03 04:31:55.430120] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.443 [2024-11-03 04:31:55.430202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.443 [2024-11-03 04:31:55.430242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.443 [2024-11-03 04:31:55.430266] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.443 [2024-11-03 04:31:55.430286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.443 [2024-11-03 04:31:55.430342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.443 04:31:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.443 04:31:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.443 04:31:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.443 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:32.701 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.702 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.702 04:31:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.903 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.903 04:32:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.903 04:32:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.903 04:32:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.904 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.904 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.65 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.65 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:44.904 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:11:44.904 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:11:44.904 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:44.904 04:32:07 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67216 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67216 ']' 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67216 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67216 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67216' 00:11:44.904 killing process with pid 67216 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67216 00:11:44.904 04:32:07 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67216 00:11:46.286 04:32:08 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:46.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.859 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.859 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.859 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.859 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.859 00:11:46.859 real 2m29.363s 00:11:46.859 user 1m51.013s 00:11:46.859 sys 0m16.935s 00:11:46.859 04:32:09 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.859 ************************************ 00:11:46.859 END TEST sw_hotplug 00:11:46.859 ************************************ 00:11:46.859 04:32:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.121 04:32:09 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:47.121 04:32:09 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:47.121 04:32:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:47.121 04:32:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.121 04:32:09 -- common/autotest_common.sh@10 -- # set +x 00:11:47.121 ************************************ 00:11:47.121 START TEST nvme_xnvme 00:11:47.121 ************************************ 00:11:47.121 04:32:09 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:47.121 * Looking for test storage... 00:11:47.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.121 04:32:10 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.121 04:32:10 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:47.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.121 --rc genhtml_branch_coverage=1 00:11:47.121 --rc genhtml_function_coverage=1 00:11:47.122 --rc genhtml_legend=1 00:11:47.122 --rc geninfo_all_blocks=1 00:11:47.122 --rc geninfo_unexecuted_blocks=1 00:11:47.122 00:11:47.122 ' 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.122 --rc genhtml_branch_coverage=1 00:11:47.122 --rc genhtml_function_coverage=1 00:11:47.122 --rc genhtml_legend=1 00:11:47.122 --rc geninfo_all_blocks=1 00:11:47.122 --rc geninfo_unexecuted_blocks=1 00:11:47.122 00:11:47.122 ' 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.122 --rc genhtml_branch_coverage=1 00:11:47.122 --rc genhtml_function_coverage=1 00:11:47.122 --rc genhtml_legend=1 00:11:47.122 --rc geninfo_all_blocks=1 00:11:47.122 --rc geninfo_unexecuted_blocks=1 00:11:47.122 00:11:47.122 ' 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.122 --rc genhtml_branch_coverage=1 00:11:47.122 --rc genhtml_function_coverage=1 00:11:47.122 --rc genhtml_legend=1 00:11:47.122 --rc geninfo_all_blocks=1 00:11:47.122 --rc geninfo_unexecuted_blocks=1 00:11:47.122 00:11:47.122 ' 00:11:47.122 04:32:10 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.122 04:32:10 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.122 04:32:10 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.122 04:32:10 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.122 04:32:10 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.122 04:32:10 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.122 04:32:10 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.122 04:32:10 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.122 04:32:10 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:47.122 04:32:10 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.122 04:32:10 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.122 04:32:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:47.122 ************************************ 00:11:47.122 START TEST xnvme_to_malloc_dd_copy 00:11:47.122 ************************************ 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:47.122 04:32:10 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:47.122 04:32:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:47.384 { 00:11:47.384 "subsystems": [ 00:11:47.384 { 00:11:47.384 "subsystem": "bdev", 00:11:47.384 "config": [ 00:11:47.384 { 00:11:47.384 "params": { 00:11:47.384 "block_size": 512, 00:11:47.384 "num_blocks": 2097152, 00:11:47.384 "name": "malloc0" 00:11:47.384 }, 00:11:47.384 "method": "bdev_malloc_create" 00:11:47.384 }, 00:11:47.384 { 00:11:47.384 "params": { 00:11:47.384 "io_mechanism": "libaio", 00:11:47.384 "filename": "/dev/nullb0", 00:11:47.384 "name": "null0" 00:11:47.384 }, 00:11:47.384 "method": "bdev_xnvme_create" 00:11:47.384 }, 00:11:47.384 { 00:11:47.384 "method": "bdev_wait_for_examine" 00:11:47.384 } 00:11:47.384 ] 00:11:47.384 } 00:11:47.384 ] 00:11:47.384 } 00:11:47.384 [2024-11-03 04:32:10.270949] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
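The JSON block printed above is the exact configuration the test generates (gen_conf) and hands to spdk_dd on /dev/fd/62: a 1 GiB malloc bdev with 512-byte blocks (2097152 blocks) on one side and an xnvme bdev over /dev/nullb0 with the libaio io_mechanism on the other. Reassembled as a standalone invocation, assuming an xnvme-enabled SPDK build at $SPDK_DIR and a temp file in place of the fd-62 plumbing:

# Same bdev configuration as printed in the log, written to a temp file
# (stand-in for the harness's gen_conf + /dev/fd/62 pipeline).
modprobe null_blk gb=1            # provides /dev/nullb0, as traced above
cat > /tmp/xnvme_dd.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json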
00:11:47.384 [2024-11-03 04:32:10.271090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68586 ] 00:11:47.384 [2024-11-03 04:32:10.434364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.646 [2024-11-03 04:32:10.554445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.563  [2024-11-03T04:32:14.032Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-03T04:32:14.599Z] Copying: 461/1024 [MB] (235 MBps) [2024-11-03T04:32:15.534Z] Copying: 762/1024 [MB] (300 MBps) [2024-11-03T04:32:17.477Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:11:54.393 00:11:54.393 04:32:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:54.393 04:32:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:54.393 04:32:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:54.393 04:32:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.393 { 00:11:54.393 "subsystems": [ 00:11:54.393 { 00:11:54.393 "subsystem": "bdev", 00:11:54.393 "config": [ 00:11:54.393 { 00:11:54.393 "params": { 00:11:54.393 "block_size": 512, 00:11:54.393 "num_blocks": 2097152, 00:11:54.393 "name": "malloc0" 00:11:54.393 }, 00:11:54.393 "method": "bdev_malloc_create" 00:11:54.393 }, 00:11:54.393 { 00:11:54.393 "params": { 00:11:54.393 "io_mechanism": "libaio", 00:11:54.393 "filename": "/dev/nullb0", 00:11:54.393 "name": "null0" 00:11:54.393 }, 00:11:54.393 "method": "bdev_xnvme_create" 00:11:54.393 }, 00:11:54.393 { 00:11:54.393 "method": "bdev_wait_for_examine" 00:11:54.393 } 00:11:54.393 ] 00:11:54.393 } 00:11:54.393 ] 00:11:54.393 } 00:11:54.393 [2024-11-03 04:32:17.439651] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:11:54.393 [2024-11-03 04:32:17.439769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68673 ] 00:11:54.652 [2024-11-03 04:32:17.598279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.652 [2024-11-03 04:32:17.684441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.551  [2024-11-03T04:32:20.583Z] Copying: 304/1024 [MB] (304 MBps) [2024-11-03T04:32:21.517Z] Copying: 609/1024 [MB] (305 MBps) [2024-11-03T04:32:22.084Z] Copying: 914/1024 [MB] (305 MBps) [2024-11-03T04:32:23.985Z] Copying: 1024/1024 [MB] (average 305 MBps) 00:12:00.901 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:00.901 04:32:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 { 00:12:00.901 "subsystems": [ 00:12:00.901 { 00:12:00.901 "subsystem": "bdev", 00:12:00.901 "config": [ 00:12:00.901 { 00:12:00.901 "params": { 00:12:00.901 "block_size": 512, 00:12:00.901 "num_blocks": 2097152, 00:12:00.901 "name": "malloc0" 00:12:00.901 }, 00:12:00.901 "method": "bdev_malloc_create" 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "params": { 00:12:00.901 "io_mechanism": "io_uring", 00:12:00.901 "filename": "/dev/nullb0", 00:12:00.901 "name": "null0" 00:12:00.901 }, 00:12:00.901 "method": "bdev_xnvme_create" 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "method": "bdev_wait_for_examine" 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 [2024-11-03 04:32:23.763311] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
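The same copy is then repeated with io_uring; the only change the @38-@39 trace makes is flipping one key in the bash associative array that gen_conf later serializes into the bdev_xnvme_create params block. A sketch of that pattern, with names taken from the trace (gen_conf itself is not reproduced here):

# Names as traced in xnvme.sh; the harness's gen_conf turns this array into
# the "params" object of the bdev_xnvme_create method seen in the JSON above.
declare -A method_bdev_xnvme_create_0=(
    [name]=null0                   # xnvme.sh@35
    [filename]=/dev/nullb0         # xnvme.sh@36
)
for io in libaio io_uring; do                       # xnvme.sh@38
    method_bdev_xnvme_create_0[io_mechanism]=$io    # xnvme.sh@39
    # ...regenerate the JSON config and rerun spdk_dd in both directions...
done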
00:12:00.901 [2024-11-03 04:32:23.763432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68749 ] 00:12:00.901 [2024-11-03 04:32:23.919237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.160 [2024-11-03 04:32:24.002952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.068  [2024-11-03T04:32:27.086Z] Copying: 290/1024 [MB] (290 MBps) [2024-11-03T04:32:28.020Z] Copying: 603/1024 [MB] (312 MBps) [2024-11-03T04:32:28.279Z] Copying: 915/1024 [MB] (312 MBps) [2024-11-03T04:32:30.181Z] Copying: 1024/1024 [MB] (average 305 MBps) 00:12:07.097 00:12:07.097 04:32:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:07.097 04:32:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:07.097 04:32:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:07.097 04:32:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:07.097 { 00:12:07.097 "subsystems": [ 00:12:07.097 { 00:12:07.097 "subsystem": "bdev", 00:12:07.097 "config": [ 00:12:07.097 { 00:12:07.097 "params": { 00:12:07.097 "block_size": 512, 00:12:07.097 "num_blocks": 2097152, 00:12:07.097 "name": "malloc0" 00:12:07.097 }, 00:12:07.097 "method": "bdev_malloc_create" 00:12:07.097 }, 00:12:07.097 { 00:12:07.097 "params": { 00:12:07.097 "io_mechanism": "io_uring", 00:12:07.097 "filename": "/dev/nullb0", 00:12:07.097 "name": "null0" 00:12:07.097 }, 00:12:07.097 "method": "bdev_xnvme_create" 00:12:07.097 }, 00:12:07.097 { 00:12:07.097 "method": "bdev_wait_for_examine" 00:12:07.097 } 00:12:07.097 ] 00:12:07.097 } 00:12:07.097 ] 00:12:07.097 } 00:12:07.358 [2024-11-03 04:32:30.181863] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:07.358 [2024-11-03 04:32:30.182102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68832 ] 00:12:07.358 [2024-11-03 04:32:30.342535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.620 [2024-11-03 04:32:30.460860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.531  [2024-11-03T04:32:33.620Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-03T04:32:34.554Z] Copying: 575/1024 [MB] (315 MBps) [2024-11-03T04:32:35.120Z] Copying: 891/1024 [MB] (315 MBps) [2024-11-03T04:32:37.024Z] Copying: 1024/1024 [MB] (average 299 MBps) 00:12:13.940 00:12:13.940 04:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:13.940 04:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:13.940 00:12:13.940 real 0m26.721s 00:12:13.940 user 0m23.380s 00:12:13.940 sys 0m2.819s 00:12:13.940 04:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.940 ************************************ 00:12:13.940 END TEST xnvme_to_malloc_dd_copy 00:12:13.940 ************************************ 00:12:13.940 04:32:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.940 04:32:36 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:13.940 04:32:36 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:13.940 04:32:36 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.940 04:32:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.940 ************************************ 00:12:13.940 START TEST xnvme_bdevperf 00:12:13.940 ************************************ 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:13.940 
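The xnvme_bdevperf runs that follow issue 4 KiB random reads at queue depth 64 for 5 seconds (-q 64 -w randread -t 5 -o 4096), so the MiB/s column in the result tables is simply IOPS scaled by the 4096-byte I/O size. A quick check against the first libaio result reported below:

# 4096-byte I/Os: MiB/s = IOPS * 4096 / 1048576
awk 'BEGIN { printf "%.2f MiB/s\n", 202201.16 * 4096 / 1048576 }'   # -> 789.85 MiB/s, matching the table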
04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:13.940 04:32:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:13.940 { 00:12:13.940 "subsystems": [ 00:12:13.940 { 00:12:13.940 "subsystem": "bdev", 00:12:13.940 "config": [ 00:12:13.940 { 00:12:13.940 "params": { 00:12:13.940 "io_mechanism": "libaio", 00:12:13.940 "filename": "/dev/nullb0", 00:12:13.940 "name": "null0" 00:12:13.940 }, 00:12:13.940 "method": "bdev_xnvme_create" 00:12:13.940 }, 00:12:13.940 { 00:12:13.940 "method": "bdev_wait_for_examine" 00:12:13.940 } 00:12:13.940 ] 00:12:13.940 } 00:12:13.940 ] 00:12:13.940 } 00:12:14.199 [2024-11-03 04:32:37.027995] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:14.199 [2024-11-03 04:32:37.028113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68931 ] 00:12:14.199 [2024-11-03 04:32:37.186582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.199 [2024-11-03 04:32:37.272458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.457 Running I/O for 5 seconds... 00:12:16.397 202048.00 IOPS, 789.25 MiB/s [2024-11-03T04:32:40.857Z] 202208.00 IOPS, 789.88 MiB/s [2024-11-03T04:32:41.792Z] 202069.33 IOPS, 789.33 MiB/s [2024-11-03T04:32:42.726Z] 202192.00 IOPS, 789.81 MiB/s 00:12:19.643 Latency(us) 00:12:19.643 [2024-11-03T04:32:42.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.643 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:19.643 null0 : 5.00 202201.16 789.85 0.00 0.00 314.25 111.85 2356.78 00:12:19.643 [2024-11-03T04:32:42.727Z] =================================================================================================================== 00:12:19.643 [2024-11-03T04:32:42.727Z] Total : 202201.16 789.85 0.00 0.00 314.25 111.85 2356.78 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:20.209 04:32:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:20.209 { 00:12:20.209 "subsystems": [ 00:12:20.209 { 00:12:20.209 "subsystem": "bdev", 00:12:20.209 "config": [ 00:12:20.209 { 00:12:20.209 "params": { 00:12:20.209 "io_mechanism": "io_uring", 00:12:20.209 "filename": "/dev/nullb0", 00:12:20.209 "name": "null0" 00:12:20.209 }, 00:12:20.209 "method": "bdev_xnvme_create" 00:12:20.209 }, 00:12:20.209 { 00:12:20.209 "method": 
"bdev_wait_for_examine" 00:12:20.209 } 00:12:20.209 ] 00:12:20.209 } 00:12:20.209 ] 00:12:20.209 } 00:12:20.209 [2024-11-03 04:32:43.108108] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:20.209 [2024-11-03 04:32:43.108225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69005 ] 00:12:20.209 [2024-11-03 04:32:43.266188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.466 [2024-11-03 04:32:43.349247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.466 Running I/O for 5 seconds... 00:12:22.775 231872.00 IOPS, 905.75 MiB/s [2024-11-03T04:32:46.794Z] 231776.00 IOPS, 905.38 MiB/s [2024-11-03T04:32:47.728Z] 231765.33 IOPS, 905.33 MiB/s [2024-11-03T04:32:48.663Z] 231760.00 IOPS, 905.31 MiB/s 00:12:25.579 Latency(us) 00:12:25.579 [2024-11-03T04:32:48.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.579 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:25.579 null0 : 5.00 231714.43 905.13 0.00 0.00 273.90 146.51 1518.67 00:12:25.579 [2024-11-03T04:32:48.663Z] =================================================================================================================== 00:12:25.579 [2024-11-03T04:32:48.663Z] Total : 231714.43 905.13 0.00 0.00 273.90 146.51 1518.67 00:12:26.147 04:32:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:26.147 04:32:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:26.147 00:12:26.147 real 0m12.164s 00:12:26.147 user 0m9.773s 00:12:26.147 sys 0m2.170s 00:12:26.147 ************************************ 00:12:26.147 END TEST xnvme_bdevperf 00:12:26.147 ************************************ 00:12:26.147 04:32:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.147 04:32:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 00:12:26.147 real 0m39.157s 00:12:26.147 user 0m33.273s 00:12:26.147 sys 0m5.105s 00:12:26.147 04:32:49 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.147 04:32:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 ************************************ 00:12:26.147 END TEST nvme_xnvme 00:12:26.147 ************************************ 00:12:26.147 04:32:49 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:26.147 04:32:49 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:26.147 04:32:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.147 04:32:49 -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 ************************************ 00:12:26.147 START TEST blockdev_xnvme 00:12:26.147 ************************************ 00:12:26.147 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:26.409 * Looking for test storage... 
00:12:26.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.409 04:32:49 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:26.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.409 --rc genhtml_branch_coverage=1 00:12:26.409 --rc genhtml_function_coverage=1 00:12:26.409 --rc genhtml_legend=1 00:12:26.409 --rc geninfo_all_blocks=1 00:12:26.409 --rc geninfo_unexecuted_blocks=1 00:12:26.409 00:12:26.409 ' 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:26.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.409 --rc genhtml_branch_coverage=1 00:12:26.409 --rc genhtml_function_coverage=1 00:12:26.409 --rc genhtml_legend=1 
00:12:26.409 --rc geninfo_all_blocks=1 00:12:26.409 --rc geninfo_unexecuted_blocks=1 00:12:26.409 00:12:26.409 ' 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:26.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.409 --rc genhtml_branch_coverage=1 00:12:26.409 --rc genhtml_function_coverage=1 00:12:26.409 --rc genhtml_legend=1 00:12:26.409 --rc geninfo_all_blocks=1 00:12:26.409 --rc geninfo_unexecuted_blocks=1 00:12:26.409 00:12:26.409 ' 00:12:26.409 04:32:49 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:26.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.409 --rc genhtml_branch_coverage=1 00:12:26.409 --rc genhtml_function_coverage=1 00:12:26.409 --rc genhtml_legend=1 00:12:26.409 --rc geninfo_all_blocks=1 00:12:26.409 --rc geninfo_unexecuted_blocks=1 00:12:26.409 00:12:26.409 ' 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:26.409 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69146 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69146 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69146 ']' 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.410 04:32:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.410 04:32:49 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:26.410 [2024-11-03 04:32:49.469415] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:26.410 [2024-11-03 04:32:49.469578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69146 ] 00:12:26.676 [2024-11-03 04:32:49.629151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.676 [2024-11-03 04:32:49.752073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.622 04:32:50 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.622 04:32:50 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:27.622 04:32:50 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:27.622 04:32:50 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:27.622 04:32:50 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:27.622 04:32:50 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:27.622 04:32:50 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:27.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:27.891 Waiting for block devices as requested 00:12:27.891 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:28.198 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:28.198 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:28.198 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:33.470 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:33.470 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:33.470 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:33.471 nvme0n1 00:12:33.471 nvme1n1 00:12:33.471 nvme2n1 00:12:33.471 nvme2n2 00:12:33.471 nvme2n3 00:12:33.471 nvme3n1 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b0d7a463-8394-43ec-96f8-298f7077b67f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b0d7a463-8394-43ec-96f8-298f7077b67f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4f0c53b8-f52a-4569-8f2a-12bb689dbe13"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4f0c53b8-f52a-4569-8f2a-12bb689dbe13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f75b05a8-6caa-460c-ba9f-8492cee9ee38"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f75b05a8-6caa-460c-ba9f-8492cee9ee38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ef8ab881-c947-4c99-971d-180c61d0c6be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef8ab881-c947-4c99-971d-180c61d0c6be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6a041d5a-b9bb-48d5-bc85-707eca3ece7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a041d5a-b9bb-48d5-bc85-707eca3ece7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ec754264-7d2e-47a5-b43c-0fad70262644"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ec754264-7d2e-47a5-b43c-0fad70262644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:33.471 04:32:56 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69146 00:12:33.471 04:32:56 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69146 ']' 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69146 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:33.471 04:32:56 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69146 00:12:33.472 killing process with pid 69146 00:12:33.472 04:32:56 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:33.472 04:32:56 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:33.472 04:32:56 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69146' 00:12:33.472 04:32:56 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69146 00:12:33.472 04:32:56 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69146 00:12:34.849 04:32:57 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:34.849 04:32:57 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:34.849 04:32:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:34.849 04:32:57 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:34.849 04:32:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.849 ************************************ 00:12:34.849 START TEST bdev_hello_world 00:12:34.849 ************************************ 00:12:34.849 04:32:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:34.849 [2024-11-03 04:32:57.795233] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:34.849 [2024-11-03 04:32:57.795351] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69506 ] 00:12:35.107 [2024-11-03 04:32:57.953958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.107 [2024-11-03 04:32:58.031886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.366 [2024-11-03 04:32:58.315276] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:35.366 [2024-11-03 04:32:58.315317] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:35.366 [2024-11-03 04:32:58.315329] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:35.366 [2024-11-03 04:32:58.316821] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:35.366 [2024-11-03 04:32:58.317052] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:35.366 [2024-11-03 04:32:58.317065] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:35.366 [2024-11-03 04:32:58.317300] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
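The hello_world stage above simply runs SPDK's hello_bdev example against the first xnvme bdev: it opens nvme0n1 via the bdev.json generated earlier in this run, writes a buffer, reads it back, and prints the "Hello World!" string seen in the notices. Run by hand, it amounts to the single command below (paths copied from the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
    # expected to finish with: Read string from bdev : Hello World!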
00:12:35.366 00:12:35.366 [2024-11-03 04:32:58.317314] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:35.932 ************************************ 00:12:35.932 END TEST bdev_hello_world 00:12:35.932 ************************************ 00:12:35.932 00:12:35.932 real 0m1.141s 00:12:35.932 user 0m0.874s 00:12:35.932 sys 0m0.151s 00:12:35.932 04:32:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:35.932 04:32:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:35.932 04:32:58 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:35.932 04:32:58 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:35.932 04:32:58 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.932 04:32:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.932 ************************************ 00:12:35.932 START TEST bdev_bounds 00:12:35.932 ************************************ 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69537 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:35.932 Process bdevio pid: 69537 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69537' 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69537 00:12:35.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69537 ']' 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:35.932 04:32:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:35.932 [2024-11-03 04:32:58.980328] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:35.932 [2024-11-03 04:32:58.980554] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:12:36.191 [2024-11-03 04:32:59.131073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.191 [2024-11-03 04:32:59.210825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.191 [2024-11-03 04:32:59.211034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.191 [2024-11-03 04:32:59.211000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.764 04:32:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:36.764 04:32:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:12:36.764 04:32:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:37.028 I/O targets: 00:12:37.028 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:37.028 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:37.028 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:37.028 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:37.028 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:37.028 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:37.028 00:12:37.028 00:12:37.028 CUnit - A unit testing framework for C - Version 2.1-3 00:12:37.028 http://cunit.sourceforge.net/ 00:12:37.028 00:12:37.028 00:12:37.028 Suite: bdevio tests on: nvme3n1 00:12:37.028 Test: blockdev write read block ...passed 00:12:37.028 Test: blockdev write zeroes read block ...passed 00:12:37.029 Test: blockdev write zeroes read no split ...passed 00:12:37.029 Test: blockdev write zeroes read split ...passed 00:12:37.029 Test: blockdev write zeroes read split partial ...passed 00:12:37.029 Test: blockdev reset ...passed 00:12:37.029 Test: blockdev write read 8 blocks ...passed 00:12:37.029 Test: blockdev write read size > 128k ...passed 00:12:37.029 Test: blockdev write read invalid size ...passed 00:12:37.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.029 Test: blockdev write read max offset ...passed 00:12:37.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.029 Test: blockdev writev readv 8 blocks ...passed 00:12:37.029 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.029 Test: blockdev writev readv block ...passed 00:12:37.029 Test: blockdev writev readv size > 128k ...passed 00:12:37.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.029 Test: blockdev comparev and writev ...passed 00:12:37.029 Test: blockdev nvme passthru rw ...passed 00:12:37.029 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.029 Test: blockdev nvme admin passthru ...passed 00:12:37.029 Test: blockdev copy ...passed 00:12:37.029 Suite: bdevio tests on: nvme2n3 00:12:37.029 Test: blockdev write read block ...passed 00:12:37.029 Test: blockdev write zeroes read block ...passed 00:12:37.029 Test: blockdev write zeroes read no split ...passed 00:12:37.029 Test: blockdev write zeroes read split ...passed 00:12:37.029 Test: blockdev write zeroes read split partial ...passed 00:12:37.029 Test: blockdev reset ...passed 
00:12:37.029 Test: blockdev write read 8 blocks ...passed 00:12:37.029 Test: blockdev write read size > 128k ...passed 00:12:37.029 Test: blockdev write read invalid size ...passed 00:12:37.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.029 Test: blockdev write read max offset ...passed 00:12:37.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.029 Test: blockdev writev readv 8 blocks ...passed 00:12:37.029 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.029 Test: blockdev writev readv block ...passed 00:12:37.029 Test: blockdev writev readv size > 128k ...passed 00:12:37.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.029 Test: blockdev comparev and writev ...passed 00:12:37.029 Test: blockdev nvme passthru rw ...passed 00:12:37.029 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.029 Test: blockdev nvme admin passthru ...passed 00:12:37.029 Test: blockdev copy ...passed 00:12:37.029 Suite: bdevio tests on: nvme2n2 00:12:37.029 Test: blockdev write read block ...passed 00:12:37.029 Test: blockdev write zeroes read block ...passed 00:12:37.287 Test: blockdev write zeroes read no split ...passed 00:12:37.287 Test: blockdev write zeroes read split ...passed 00:12:37.287 Test: blockdev write zeroes read split partial ...passed 00:12:37.287 Test: blockdev reset ...passed 00:12:37.287 Test: blockdev write read 8 blocks ...passed 00:12:37.287 Test: blockdev write read size > 128k ...passed 00:12:37.287 Test: blockdev write read invalid size ...passed 00:12:37.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.287 Test: blockdev write read max offset ...passed 00:12:37.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.287 Test: blockdev writev readv 8 blocks ...passed 00:12:37.287 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.287 Test: blockdev writev readv block ...passed 00:12:37.287 Test: blockdev writev readv size > 128k ...passed 00:12:37.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.287 Test: blockdev comparev and writev ...passed 00:12:37.287 Test: blockdev nvme passthru rw ...passed 00:12:37.287 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.287 Test: blockdev nvme admin passthru ...passed 00:12:37.287 Test: blockdev copy ...passed 00:12:37.287 Suite: bdevio tests on: nvme2n1 00:12:37.287 Test: blockdev write read block ...passed 00:12:37.287 Test: blockdev write zeroes read block ...passed 00:12:37.287 Test: blockdev write zeroes read no split ...passed 00:12:37.287 Test: blockdev write zeroes read split ...passed 00:12:37.287 Test: blockdev write zeroes read split partial ...passed 00:12:37.287 Test: blockdev reset ...passed 00:12:37.287 Test: blockdev write read 8 blocks ...passed 00:12:37.287 Test: blockdev write read size > 128k ...passed 00:12:37.287 Test: blockdev write read invalid size ...passed 00:12:37.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.287 Test: blockdev write read max offset ...passed 00:12:37.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.287 Test: blockdev writev readv 8 blocks 
...passed 00:12:37.287 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.287 Test: blockdev writev readv block ...passed 00:12:37.287 Test: blockdev writev readv size > 128k ...passed 00:12:37.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.287 Test: blockdev comparev and writev ...passed 00:12:37.287 Test: blockdev nvme passthru rw ...passed 00:12:37.287 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.287 Test: blockdev nvme admin passthru ...passed 00:12:37.287 Test: blockdev copy ...passed 00:12:37.287 Suite: bdevio tests on: nvme1n1 00:12:37.287 Test: blockdev write read block ...passed 00:12:37.287 Test: blockdev write zeroes read block ...passed 00:12:37.287 Test: blockdev write zeroes read no split ...passed 00:12:37.287 Test: blockdev write zeroes read split ...passed 00:12:37.287 Test: blockdev write zeroes read split partial ...passed 00:12:37.287 Test: blockdev reset ...passed 00:12:37.287 Test: blockdev write read 8 blocks ...passed 00:12:37.287 Test: blockdev write read size > 128k ...passed 00:12:37.287 Test: blockdev write read invalid size ...passed 00:12:37.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.287 Test: blockdev write read max offset ...passed 00:12:37.287 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.287 Test: blockdev writev readv 8 blocks ...passed 00:12:37.287 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.287 Test: blockdev writev readv block ...passed 00:12:37.287 Test: blockdev writev readv size > 128k ...passed 00:12:37.287 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.287 Test: blockdev comparev and writev ...passed 00:12:37.287 Test: blockdev nvme passthru rw ...passed 00:12:37.287 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.287 Test: blockdev nvme admin passthru ...passed 00:12:37.287 Test: blockdev copy ...passed 00:12:37.287 Suite: bdevio tests on: nvme0n1 00:12:37.287 Test: blockdev write read block ...passed 00:12:37.287 Test: blockdev write zeroes read block ...passed 00:12:37.287 Test: blockdev write zeroes read no split ...passed 00:12:37.546 Test: blockdev write zeroes read split ...passed 00:12:37.546 Test: blockdev write zeroes read split partial ...passed 00:12:37.546 Test: blockdev reset ...passed 00:12:37.546 Test: blockdev write read 8 blocks ...passed 00:12:37.546 Test: blockdev write read size > 128k ...passed 00:12:37.546 Test: blockdev write read invalid size ...passed 00:12:37.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.546 Test: blockdev write read max offset ...passed 00:12:37.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.546 Test: blockdev writev readv 8 blocks ...passed 00:12:37.546 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.546 Test: blockdev writev readv block ...passed 00:12:37.546 Test: blockdev writev readv size > 128k ...passed 00:12:37.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.546 Test: blockdev comparev and writev ...passed 00:12:37.546 Test: blockdev nvme passthru rw ...passed 00:12:37.546 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.546 Test: blockdev nvme admin passthru ...passed 00:12:37.546 Test: blockdev copy ...passed 
00:12:37.546 00:12:37.546 Run Summary: Type Total Ran Passed Failed Inactive 00:12:37.546 suites 6 6 n/a 0 0 00:12:37.546 tests 138 138 138 0 0 00:12:37.546 asserts 780 780 780 0 n/a 00:12:37.546 00:12:37.546 Elapsed time = 1.287 seconds 00:12:37.546 0 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69537 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69537 ']' 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69537 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69537 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69537' 00:12:37.546 killing process with pid 69537 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69537 00:12:37.546 04:33:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69537 00:12:38.114 04:33:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:38.114 00:12:38.114 real 0m2.248s 00:12:38.114 user 0m5.692s 00:12:38.114 sys 0m0.276s 00:12:38.114 04:33:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.114 ************************************ 00:12:38.114 END TEST bdev_bounds 00:12:38.114 ************************************ 00:12:38.114 04:33:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:38.375 04:33:01 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:38.375 04:33:01 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:38.375 04:33:01 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.375 04:33:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.375 ************************************ 00:12:38.375 START TEST bdev_nbd 00:12:38.375 ************************************ 00:12:38.375 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:38.375 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
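The bdev_nbd stage that starts here exposes each of the six xnvme bdevs through the kernel NBD driver and sanity-checks every mapping with a single 4 KiB direct read, using the RPCs visible in the trace that follows. Condensed to one device, and assuming a bdev_svc instance is already listening on the socket with the same bdev.json loaded (nbd_stop_disk is not shown in this excerpt but is the usual counterpart to nbd_start_disk; the scratch output path is arbitrary):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    # map the xnvme bdev onto the first free /dev/nbd* node; the RPC prints the path
    nbd_dev=$("$RPC" -s "$SOCK" nbd_start_disk nvme0n1)

    # one-block direct read, the same check waitfornbd/dd performs below
    dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # tear the mapping down again
    "$RPC" -s "$SOCK" nbd_stop_disk "$nbd_dev"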
00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69597 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69597 /var/tmp/spdk-nbd.sock 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69597 ']' 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.376 04:33:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:38.376 [2024-11-03 04:33:01.323781] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:38.376 [2024-11-03 04:33:01.323924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.636 [2024-11-03 04:33:01.481337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.636 [2024-11-03 04:33:01.601658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.208 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.470 
1+0 records in 00:12:39.470 1+0 records out 00:12:39.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134005 s, 3.1 MB/s 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.470 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.731 1+0 records in 00:12:39.731 1+0 records out 00:12:39.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974735 s, 4.2 MB/s 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.731 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:39.992 04:33:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.992 1+0 records in 00:12:39.992 1+0 records out 00:12:39.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102062 s, 4.0 MB/s 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.992 04:33:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:40.253 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.254 1+0 records in 00:12:40.254 1+0 records out 00:12:40.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907805 s, 4.5 MB/s 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.254 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.515 1+0 records in 00:12:40.515 1+0 records out 00:12:40.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107208 s, 3.8 MB/s 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.515 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:12:40.776 04:33:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.776 1+0 records in 00:12:40.776 1+0 records out 00:12:40.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137359 s, 3.0 MB/s 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.776 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd0", 00:12:41.037 "bdev_name": "nvme0n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd1", 00:12:41.037 "bdev_name": "nvme1n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd2", 00:12:41.037 "bdev_name": "nvme2n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd3", 00:12:41.037 "bdev_name": "nvme2n2" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd4", 00:12:41.037 "bdev_name": "nvme2n3" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd5", 00:12:41.037 "bdev_name": "nvme3n1" 00:12:41.037 } 00:12:41.037 ]' 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd0", 00:12:41.037 "bdev_name": "nvme0n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd1", 00:12:41.037 "bdev_name": "nvme1n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd2", 00:12:41.037 "bdev_name": "nvme2n1" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd3", 00:12:41.037 "bdev_name": "nvme2n2" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": "/dev/nbd4", 00:12:41.037 "bdev_name": "nvme2n3" 00:12:41.037 }, 00:12:41.037 { 00:12:41.037 "nbd_device": 
"/dev/nbd5", 00:12:41.037 "bdev_name": "nvme3n1" 00:12:41.037 } 00:12:41.037 ]' 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.037 04:33:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.298 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.559 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:41.818 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:41.818 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.819 04:33:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.079 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.080 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.341 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.601 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:42.862 /dev/nbd0 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.862 1+0 records in 00:12:42.862 1+0 records out 00:12:42.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793153 s, 5.2 MB/s 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:42.862 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.863 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:42.863 04:33:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:42.863 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.863 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.863 04:33:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:43.122 /dev/nbd1 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.122 1+0 records in 00:12:43.122 1+0 records out 00:12:43.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159847 s, 2.6 MB/s 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:43.122 04:33:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:43.122 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.123 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:43.123 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:43.383 /dev/nbd10 00:12:43.383 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:43.383 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:43.383 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.384 1+0 records in 00:12:43.384 1+0 records out 00:12:43.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112841 s, 3.6 MB/s 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:43.384 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:43.644 /dev/nbd11 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:43.644 04:33:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.644 1+0 records in 00:12:43.644 1+0 records out 00:12:43.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968642 s, 4.2 MB/s 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:43.644 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:43.906 /dev/nbd12 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.906 1+0 records in 00:12:43.906 1+0 records out 00:12:43.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005019 s, 8.2 MB/s 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:43.906 04:33:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:43.906 /dev/nbd13 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.167 1+0 records in 00:12:44.167 1+0 records out 00:12:44.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136459 s, 3.0 MB/s 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd0", 00:12:44.167 "bdev_name": "nvme0n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd1", 00:12:44.167 "bdev_name": "nvme1n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd10", 00:12:44.167 "bdev_name": "nvme2n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd11", 00:12:44.167 "bdev_name": "nvme2n2" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd12", 00:12:44.167 "bdev_name": "nvme2n3" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd13", 00:12:44.167 "bdev_name": "nvme3n1" 00:12:44.167 } 00:12:44.167 ]' 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.167 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd0", 00:12:44.167 "bdev_name": "nvme0n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd1", 00:12:44.167 "bdev_name": "nvme1n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd10", 00:12:44.167 "bdev_name": "nvme2n1" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd11", 00:12:44.167 "bdev_name": "nvme2n2" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd12", 00:12:44.167 "bdev_name": "nvme2n3" 00:12:44.167 }, 00:12:44.167 { 00:12:44.167 "nbd_device": "/dev/nbd13", 00:12:44.167 "bdev_name": "nvme3n1" 00:12:44.167 } 00:12:44.167 ]' 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:44.428 /dev/nbd1 00:12:44.428 /dev/nbd10 00:12:44.428 /dev/nbd11 00:12:44.428 /dev/nbd12 00:12:44.428 /dev/nbd13' 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:44.428 /dev/nbd1 00:12:44.428 /dev/nbd10 00:12:44.428 /dev/nbd11 00:12:44.428 /dev/nbd12 00:12:44.428 /dev/nbd13' 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:44.428 256+0 records in 00:12:44.428 256+0 records out 00:12:44.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734272 s, 143 MB/s 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:44.428 256+0 records in 00:12:44.428 256+0 records out 00:12:44.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175475 s, 6.0 MB/s 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.428 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:44.689 256+0 records in 00:12:44.690 256+0 records out 00:12:44.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.27009 s, 
3.9 MB/s 00:12:44.690 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.690 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:44.992 256+0 records in 00:12:44.992 256+0 records out 00:12:44.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140001 s, 7.5 MB/s 00:12:44.992 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.992 04:33:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:45.252 256+0 records in 00:12:45.252 256+0 records out 00:12:45.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229939 s, 4.6 MB/s 00:12:45.252 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.252 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:45.252 256+0 records in 00:12:45.252 256+0 records out 00:12:45.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178186 s, 5.9 MB/s 00:12:45.252 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.252 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:45.512 256+0 records in 00:12:45.512 256+0 records out 00:12:45.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203074 s, 5.2 MB/s 00:12:45.512 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:45.512 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:45.512 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:45.512 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:45.512 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:45.513 
04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.513 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.774 04:33:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.036 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.295 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.553 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.811 
04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:46.811 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.068 04:33:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:47.069 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:47.328 malloc_lvol_verify 00:12:47.328 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:47.587 389dac7b-176e-4fff-8210-8b749c2d6d5a 00:12:47.587 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:47.847 3857e5e6-54b5-4ca0-9d69-6fcd2b29cb04 00:12:47.847 04:33:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:48.105 /dev/nbd0 00:12:48.105 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:48.105 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:48.105 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:48.105 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:48.105 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:48.105 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:12:48.105 
done 00:12:48.105 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:48.105 00:12:48.105 Allocating group tables: 0/1 done 00:12:48.106 Writing inode tables: 0/1 done 00:12:48.106 Creating journal (1024 blocks): done 00:12:48.106 Writing superblocks and filesystem accounting information: 0/1 done 00:12:48.106 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.106 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69597 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69597 ']' 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69597 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69597 00:12:48.366 killing process with pid 69597 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69597' 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69597 00:12:48.366 04:33:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69597 00:12:49.308 ************************************ 00:12:49.308 END TEST bdev_nbd 00:12:49.308 ************************************ 00:12:49.308 04:33:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:49.308 00:12:49.308 real 0m10.814s 00:12:49.308 user 0m14.603s 00:12:49.308 sys 0m3.717s 00:12:49.308 04:33:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.308 04:33:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
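For reference, the NBD round trip that the bdev_nbd test exercises above reduces to roughly the following bash sketch. The rpc.py path, RPC socket, bdev name and nbd node are the ones used in this run; the scratch file path is an arbitrary stand-in, and an SPDK application (here bdev_svc) is assumed to already be serving RPCs on the socket with the nbd kernel module loaded:

  # minimal sketch of the export / read-back / teardown cycle traced above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0                 # expose bdev nvme0n1 as /dev/nbd0
  for i in $(seq 1 20); do                                          # wait for the kernel node, as waitfornbd does
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct      # read one 4 KiB block back (scratch path is illustrative)
  $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'         # lists the active export, here /dev/nbd0
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0                           # tear the export down again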
00:12:49.308 04:33:12 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:49.308 04:33:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:49.308 04:33:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:49.308 04:33:12 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:49.308 04:33:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.308 04:33:12 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.308 04:33:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.308 ************************************ 00:12:49.308 START TEST bdev_fio 00:12:49.308 ************************************ 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:49.308 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:12:49.308 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:49.309 ************************************ 00:12:49.309 START TEST bdev_fio_rw_verify 00:12:49.309 ************************************ 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:49.309 04:33:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:49.568 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:49.568 fio-3.35 00:12:49.568 Starting 6 threads 00:13:01.804 00:13:01.804 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69999: Sun Nov 3 04:33:23 2024 00:13:01.804 read: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(495MiB/10003msec) 00:13:01.804 slat (usec): min=2, max=1948, avg= 6.65, stdev=14.36 00:13:01.804 clat (usec): min=95, max=42957, avg=1561.26, stdev=917.03 00:13:01.804 lat (usec): min=98, max=42978, avg=1567.92, stdev=917.39 
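Editor's sketch: the rw_verify stage drives fio through SPDK's external spdk_bdev ioengine rather than the kernel block layer. libasan is preloaded ahead of the fio plugin so the sanitizer interposes first, and the bdev set comes from bdev.json. A condensed form of the invocation shown above (paths exactly as in the log; the --spdk_mem and --aux-path options are omitted here for brevity):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --verify_state_save=0 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

Each [job_nvmeXnY] section echoed into bdev.fio above becomes one of the six fio threads whose results follow.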
00:13:01.804 clat percentiles (usec): 00:13:01.804 | 50.000th=[ 1450], 99.000th=[ 4228], 99.900th=[ 5735], 99.990th=[ 8455], 00:13:01.804 | 99.999th=[42730] 00:13:01.804 write: IOPS=12.9k, BW=50.5MiB/s (52.9MB/s)(505MiB/10003msec); 0 zone resets 00:13:01.804 slat (usec): min=12, max=6961, avg=45.52, stdev=167.40 00:13:01.804 clat (usec): min=87, max=10782, avg=1834.07, stdev=971.89 00:13:01.804 lat (usec): min=110, max=10811, avg=1879.59, stdev=986.83 00:13:01.804 clat percentiles (usec): 00:13:01.804 | 50.000th=[ 1680], 99.000th=[ 4883], 99.900th=[ 6783], 99.990th=[ 8455], 00:13:01.804 | 99.999th=[10814] 00:13:01.804 bw ( KiB/s): min=41308, max=76803, per=100.00%, avg=51893.58, stdev=1721.27, samples=114 00:13:01.804 iops : min=10325, max=19200, avg=12971.89, stdev=430.37, samples=114 00:13:01.804 lat (usec) : 100=0.01%, 250=1.25%, 500=4.99%, 750=7.13%, 1000=9.44% 00:13:01.804 lat (msec) : 2=46.47%, 4=28.45%, 10=2.26%, 20=0.01%, 50=0.01% 00:13:01.804 cpu : usr=44.06%, sys=32.53%, ctx=4995, majf=0, minf=13434 00:13:01.804 IO depths : 1=11.3%, 2=23.7%, 4=51.3%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.804 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.804 issued rwts: total=126847,129228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.804 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:01.804 00:13:01.804 Run status group 0 (all jobs): 00:13:01.804 READ: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=495MiB (520MB), run=10003-10003msec 00:13:01.804 WRITE: bw=50.5MiB/s (52.9MB/s), 50.5MiB/s-50.5MiB/s (52.9MB/s-52.9MB/s), io=505MiB (529MB), run=10003-10003msec 00:13:01.804 ----------------------------------------------------- 00:13:01.804 Suppressions used: 00:13:01.804 count bytes template 00:13:01.804 6 48 /usr/src/fio/parse.c 00:13:01.804 2304 221184 /usr/src/fio/iolog.c 00:13:01.804 1 8 libtcmalloc_minimal.so 00:13:01.804 1 904 libcrypto.so 00:13:01.804 ----------------------------------------------------- 00:13:01.804 00:13:01.804 00:13:01.804 real 0m11.943s 00:13:01.804 user 0m27.943s 00:13:01.804 sys 0m19.842s 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.804 ************************************ 00:13:01.804 END TEST bdev_fio_rw_verify 00:13:01.804 ************************************ 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:01.804 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:01.805 04:33:24 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b0d7a463-8394-43ec-96f8-298f7077b67f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b0d7a463-8394-43ec-96f8-298f7077b67f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "4f0c53b8-f52a-4569-8f2a-12bb689dbe13"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4f0c53b8-f52a-4569-8f2a-12bb689dbe13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f75b05a8-6caa-460c-ba9f-8492cee9ee38"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f75b05a8-6caa-460c-ba9f-8492cee9ee38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": 
false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ef8ab881-c947-4c99-971d-180c61d0c6be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef8ab881-c947-4c99-971d-180c61d0c6be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6a041d5a-b9bb-48d5-bc85-707eca3ece7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a041d5a-b9bb-48d5-bc85-707eca3ece7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ec754264-7d2e-47a5-b43c-0fad70262644"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ec754264-7d2e-47a5-b43c-0fad70262644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.805 /home/vagrant/spdk_repo/spdk 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:01.805 00:13:01.805 real 0m12.123s 00:13:01.805 
user 0m28.018s 00:13:01.805 sys 0m19.927s 00:13:01.805 ************************************ 00:13:01.805 END TEST bdev_fio 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.805 04:33:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 ************************************ 00:13:01.805 04:33:24 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:01.805 04:33:24 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:01.805 04:33:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:01.805 04:33:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.805 04:33:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 ************************************ 00:13:01.805 START TEST bdev_verify 00:13:01.805 ************************************ 00:13:01.805 04:33:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:01.805 [2024-11-03 04:33:24.401466] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:01.805 [2024-11-03 04:33:24.401737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70173 ] 00:13:01.805 [2024-11-03 04:33:24.568862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.805 [2024-11-03 04:33:24.691290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.805 [2024-11-03 04:33:24.691387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.066 Running I/O for 5 seconds... 
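Editor's sketch: bdev_verify runs the bdevperf example application in verify mode against the xnvme bdevs described in bdev.json. The core mask -m 0x3 spreads the workload across two reactors, which is why every bdev appears twice (Core Mask 0x1 and Core Mask 0x2) in the result table that follows. The invocation, as started above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3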
00:13:04.398 23136.00 IOPS, 90.38 MiB/s [2024-11-03T04:33:28.423Z] 22800.00 IOPS, 89.06 MiB/s [2024-11-03T04:33:29.368Z] 22773.33 IOPS, 88.96 MiB/s [2024-11-03T04:33:30.310Z] 23544.00 IOPS, 91.97 MiB/s [2024-11-03T04:33:30.310Z] 23897.60 IOPS, 93.35 MiB/s 00:13:07.226 Latency(us) 00:13:07.226 [2024-11-03T04:33:30.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.226 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0x0 length 0xa0000 00:13:07.226 nvme0n1 : 5.05 2003.97 7.83 0.00 0.00 63756.83 11342.77 67350.84 00:13:07.226 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0xa0000 length 0xa0000 00:13:07.226 nvme0n1 : 5.03 1757.41 6.86 0.00 0.00 72697.77 7914.73 72593.72 00:13:07.226 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0x0 length 0xbd0bd 00:13:07.226 nvme1n1 : 5.05 2406.35 9.40 0.00 0.00 52778.10 6402.36 58881.58 00:13:07.226 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:07.226 nvme1n1 : 5.05 2242.41 8.76 0.00 0.00 56728.29 5923.45 66140.95 00:13:07.226 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0x0 length 0x80000 00:13:07.226 nvme2n1 : 5.06 2049.22 8.00 0.00 0.00 61960.98 8166.79 73400.32 00:13:07.226 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.226 Verification LBA range: start 0x80000 length 0x80000 00:13:07.226 nvme2n1 : 5.07 1817.89 7.10 0.00 0.00 69895.99 4436.28 67350.84 00:13:07.226 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x0 length 0x80000 00:13:07.227 nvme2n2 : 5.05 2002.81 7.82 0.00 0.00 63278.40 10536.17 68157.44 00:13:07.227 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x80000 length 0x80000 00:13:07.227 nvme2n2 : 5.05 1747.66 6.83 0.00 0.00 72561.35 14216.27 67350.84 00:13:07.227 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x0 length 0x80000 00:13:07.227 nvme2n3 : 5.06 2022.97 7.90 0.00 0.00 62534.85 8418.86 67754.14 00:13:07.227 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x80000 length 0x80000 00:13:07.227 nvme2n3 : 5.07 1765.66 6.90 0.00 0.00 71649.71 5847.83 66140.95 00:13:07.227 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x0 length 0x20000 00:13:07.227 nvme3n1 : 5.07 2043.40 7.98 0.00 0.00 61809.31 1739.22 66947.54 00:13:07.227 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.227 Verification LBA range: start 0x20000 length 0x20000 00:13:07.227 nvme3n1 : 5.07 1766.76 6.90 0.00 0.00 71444.99 5041.23 66544.25 00:13:07.227 [2024-11-03T04:33:30.311Z] =================================================================================================================== 00:13:07.227 [2024-11-03T04:33:30.311Z] Total : 23626.49 92.29 0.00 0.00 64456.73 1739.22 73400.32 00:13:08.171 00:13:08.171 real 0m6.821s 00:13:08.171 user 0m11.028s 00:13:08.171 sys 0m1.422s 00:13:08.171 04:33:31 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.171 04:33:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:08.171 ************************************ 00:13:08.171 END TEST bdev_verify 00:13:08.171 ************************************ 00:13:08.171 04:33:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:08.171 04:33:31 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:08.171 04:33:31 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.171 04:33:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.171 ************************************ 00:13:08.171 START TEST bdev_verify_big_io 00:13:08.171 ************************************ 00:13:08.171 04:33:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:08.432 [2024-11-03 04:33:31.289482] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:08.432 [2024-11-03 04:33:31.289649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70271 ] 00:13:08.432 [2024-11-03 04:33:31.452101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.693 [2024-11-03 04:33:31.587767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.693 [2024-11-03 04:33:31.587895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.307 Running I/O for 5 seconds... 
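Editor's note: bdev_verify_big_io repeats the verify workload with 64 KiB requests (-o 65536 instead of -o 4096), so throughput rather than IOPS dominates, and the MiB/s column in the table below is simply IOPS multiplied by the request size. Worked through for the first sample line:

    1730.00 IOPS x 65536 B = 113,377,280 B/s ≈ 108.12 MiB/s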
00:13:15.414 1730.00 IOPS, 108.12 MiB/s [2024-11-03T04:33:38.759Z] 3088.50 IOPS, 193.03 MiB/s 00:13:15.675 Latency(us) 00:13:15.675 [2024-11-03T04:33:38.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.675 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0xa000 00:13:15.675 nvme0n1 : 5.78 143.96 9.00 0.00 0.00 854626.61 7864.32 916294.10 00:13:15.675 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0xa000 length 0xa000 00:13:15.675 nvme0n1 : 5.87 98.18 6.14 0.00 0.00 1218004.76 261337.40 1606741.07 00:13:15.675 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0xbd0b 00:13:15.675 nvme1n1 : 5.86 159.62 9.98 0.00 0.00 753962.17 10384.94 825955.25 00:13:15.675 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:15.675 nvme1n1 : 5.96 93.98 5.87 0.00 0.00 1223765.07 10435.35 1419610.58 00:13:15.675 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0x8000 00:13:15.675 nvme2n1 : 5.78 127.31 7.96 0.00 0.00 913092.92 87112.47 1716438.25 00:13:15.675 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x8000 length 0x8000 00:13:15.675 nvme2n1 : 5.96 106.03 6.63 0.00 0.00 1045408.32 40934.79 1329271.73 00:13:15.675 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0x8000 00:13:15.675 nvme2n2 : 5.87 174.57 10.91 0.00 0.00 650680.71 79853.10 667862.25 00:13:15.675 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x8000 length 0x8000 00:13:15.675 nvme2n2 : 6.02 106.35 6.65 0.00 0.00 997598.13 28029.24 1271196.75 00:13:15.675 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0x8000 00:13:15.675 nvme2n3 : 5.88 116.98 7.31 0.00 0.00 956391.78 11443.59 2026171.47 00:13:15.675 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x8000 length 0x8000 00:13:15.675 nvme2n3 : 6.20 149.79 9.36 0.00 0.00 679622.53 6805.66 2632732.36 00:13:15.675 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x0 length 0x2000 00:13:15.675 nvme3n1 : 5.88 206.69 12.92 0.00 0.00 529743.64 5066.44 571070.62 00:13:15.675 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.675 Verification LBA range: start 0x2000 length 0x2000 00:13:15.675 nvme3n1 : 6.37 226.88 14.18 0.00 0.00 431854.01 201.65 2219754.73 00:13:15.675 [2024-11-03T04:33:38.759Z] =================================================================================================================== 00:13:15.675 [2024-11-03T04:33:38.759Z] Total : 1710.33 106.90 0.00 0.00 783121.12 201.65 2632732.36 00:13:16.274 00:13:16.274 real 0m8.086s 00:13:16.274 user 0m14.794s 00:13:16.274 sys 0m0.528s 00:13:16.535 04:33:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.535 ************************************ 00:13:16.535 END TEST bdev_verify_big_io 
00:13:16.535 ************************************ 00:13:16.535 04:33:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.535 04:33:39 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.535 04:33:39 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:16.535 04:33:39 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.535 04:33:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.535 ************************************ 00:13:16.535 START TEST bdev_write_zeroes 00:13:16.535 ************************************ 00:13:16.535 04:33:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.535 [2024-11-03 04:33:39.465090] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:16.535 [2024-11-03 04:33:39.465190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70381 ] 00:13:16.535 [2024-11-03 04:33:39.610099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.800 [2024-11-03 04:33:39.702364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.059 Running I/O for 1 seconds... 00:13:18.000 85664.00 IOPS, 334.62 MiB/s 00:13:18.000 Latency(us) 00:13:18.000 [2024-11-03T04:33:41.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.000 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme0n1 : 1.03 13854.15 54.12 0.00 0.00 9230.31 5494.94 21173.17 00:13:18.000 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme1n1 : 1.02 15673.87 61.23 0.00 0.00 8152.66 3831.34 17543.48 00:13:18.000 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme2n1 : 1.02 13874.49 54.20 0.00 0.00 9170.21 4310.25 25811.10 00:13:18.000 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme2n2 : 1.03 13837.42 54.05 0.00 0.00 9185.97 4259.84 20769.87 00:13:18.000 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme2n3 : 1.03 13821.95 53.99 0.00 0.00 9190.58 4360.66 20769.87 00:13:18.000 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:18.000 nvme3n1 : 1.03 13806.31 53.93 0.00 0.00 9194.50 4411.08 20265.75 00:13:18.000 [2024-11-03T04:33:41.085Z] =================================================================================================================== 00:13:18.001 [2024-11-03T04:33:41.085Z] Total : 84868.18 331.52 0.00 0.00 9002.52 3831.34 25811.10 00:13:18.942 00:13:18.942 real 0m2.510s 00:13:18.942 user 0m1.894s 00:13:18.942 sys 0m0.439s 00:13:18.942 ************************************ 00:13:18.942 END TEST bdev_write_zeroes 00:13:18.942 04:33:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:18.942 04:33:41 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.942 ************************************ 00:13:18.942 04:33:41 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:18.942 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:18.942 04:33:41 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.942 04:33:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.942 ************************************ 00:13:18.942 START TEST bdev_json_nonenclosed 00:13:18.942 ************************************ 00:13:18.942 04:33:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.203 [2024-11-03 04:33:42.061883] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:19.203 [2024-11-03 04:33:42.062025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70434 ] 00:13:19.203 [2024-11-03 04:33:42.225283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.465 [2024-11-03 04:33:42.358606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.465 [2024-11-03 04:33:42.358733] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:19.465 [2024-11-03 04:33:42.358756] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:19.465 [2024-11-03 04:33:42.358768] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:19.726 00:13:19.726 real 0m0.572s 00:13:19.726 user 0m0.345s 00:13:19.726 sys 0m0.120s 00:13:19.726 04:33:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.726 ************************************ 00:13:19.726 END TEST bdev_json_nonenclosed 00:13:19.726 ************************************ 00:13:19.726 04:33:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:19.726 04:33:42 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.726 04:33:42 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:19.726 04:33:42 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.726 04:33:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.726 ************************************ 00:13:19.726 START TEST bdev_json_nonarray 00:13:19.726 ************************************ 00:13:19.726 04:33:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:19.726 [2024-11-03 04:33:42.701814] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
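Editor's sketch: bdev_json_nonenclosed above and bdev_json_nonarray (whose startup continues below) are negative tests. Each feeds bdevperf a deliberately malformed configuration and expects json_config_prepare_ctx to reject it with the errors quoted in the log. A minimal illustration of the accepted shape versus the two rejected ones; the contents are illustrative, only the top-level structure matters:

    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }    accepted: object holding a 'subsystems' array
    [ { "subsystem": "bdev", "config": [] } ]                      rejected: not enclosed in {}
    { "subsystems": { "subsystem": "bdev" } }                      rejected: 'subsystems' should be an array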
00:13:19.726 [2024-11-03 04:33:42.701943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70454 ] 00:13:19.987 [2024-11-03 04:33:42.867375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.987 [2024-11-03 04:33:42.999096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.988 [2024-11-03 04:33:42.999231] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:19.988 [2024-11-03 04:33:42.999253] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:19.988 [2024-11-03 04:33:42.999265] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:20.249 00:13:20.249 real 0m0.573s 00:13:20.249 user 0m0.349s 00:13:20.249 sys 0m0.116s 00:13:20.249 04:33:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.249 ************************************ 00:13:20.249 END TEST bdev_json_nonarray 00:13:20.249 ************************************ 00:13:20.249 04:33:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:20.249 04:33:43 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:20.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.126 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.126 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.126 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.126 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.126 00:13:24.126 real 0m57.822s 00:13:24.126 user 1m26.741s 00:13:24.126 sys 0m32.707s 00:13:24.126 ************************************ 00:13:24.126 END TEST blockdev_xnvme 00:13:24.126 ************************************ 00:13:24.126 04:33:47 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.126 04:33:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.126 04:33:47 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:24.126 04:33:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:24.126 04:33:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.126 04:33:47 -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.126 ************************************ 00:13:24.126 START TEST ublk 00:13:24.126 ************************************ 00:13:24.126 04:33:47 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:24.126 * Looking for test storage... 00:13:24.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:24.126 04:33:47 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:24.126 04:33:47 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:24.126 04:33:47 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.388 04:33:47 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.388 04:33:47 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.388 04:33:47 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.388 04:33:47 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.388 04:33:47 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.388 04:33:47 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:24.388 04:33:47 ublk -- scripts/common.sh@345 -- # : 1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.388 04:33:47 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.388 04:33:47 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@353 -- # local d=1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.388 04:33:47 ublk -- scripts/common.sh@355 -- # echo 1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.388 04:33:47 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@353 -- # local d=2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.388 04:33:47 ublk -- scripts/common.sh@355 -- # echo 2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.388 04:33:47 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.388 04:33:47 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.388 04:33:47 ublk -- scripts/common.sh@368 -- # return 0 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.388 --rc genhtml_branch_coverage=1 00:13:24.388 --rc genhtml_function_coverage=1 00:13:24.388 --rc genhtml_legend=1 00:13:24.388 --rc geninfo_all_blocks=1 00:13:24.388 --rc geninfo_unexecuted_blocks=1 00:13:24.388 00:13:24.388 ' 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.388 --rc genhtml_branch_coverage=1 00:13:24.388 --rc genhtml_function_coverage=1 00:13:24.388 --rc genhtml_legend=1 00:13:24.388 --rc geninfo_all_blocks=1 00:13:24.388 --rc geninfo_unexecuted_blocks=1 00:13:24.388 00:13:24.388 ' 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.388 --rc genhtml_branch_coverage=1 00:13:24.388 --rc genhtml_function_coverage=1 00:13:24.388 --rc genhtml_legend=1 00:13:24.388 --rc geninfo_all_blocks=1 00:13:24.388 --rc geninfo_unexecuted_blocks=1 00:13:24.388 00:13:24.388 ' 00:13:24.388 04:33:47 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.388 --rc genhtml_branch_coverage=1 00:13:24.388 --rc genhtml_function_coverage=1 00:13:24.388 --rc genhtml_legend=1 00:13:24.388 --rc geninfo_all_blocks=1 00:13:24.388 --rc geninfo_unexecuted_blocks=1 00:13:24.388 00:13:24.388 ' 00:13:24.388 04:33:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:24.388 04:33:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:24.388 04:33:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:24.388 04:33:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:24.388 04:33:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:24.388 04:33:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:24.388 04:33:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:24.388 04:33:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:24.388 04:33:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:24.389 04:33:47 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:24.389 04:33:47 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:24.389 04:33:47 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:24.389 04:33:47 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.389 04:33:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:24.389 ************************************ 00:13:24.389 START TEST test_save_ublk_config 00:13:24.389 ************************************ 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70751 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70751 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70751 ']' 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.389 04:33:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:24.389 [2024-11-03 04:33:47.385443] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
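Editor's sketch: test_save_ublk_config starts a dedicated spdk_tgt with ublk logging enabled, creates a ublk target plus one ublk disk backed by a malloc bdev, and then captures the live configuration with the save_config RPC; the JSON printed further below is that dump. The ublk-specific entries it contains, pulled out here for readability, with method and parameter names exactly as they appear in the dump:

    { "method": "ublk_create_target", "params": { "cpumask": "1" } }
    { "method": "ublk_start_disk",
      "params": { "bdev_name": "malloc0", "ublk_id": 0,
                  "num_queues": 1, "queue_depth": 128 } }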
00:13:24.389 [2024-11-03 04:33:47.385611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70751 ] 00:13:24.650 [2024-11-03 04:33:47.550597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.650 [2024-11-03 04:33:47.673600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 [2024-11-03 04:33:48.400585] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:25.593 [2024-11-03 04:33:48.401509] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:25.593 malloc0 00:13:25.593 [2024-11-03 04:33:48.472740] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:25.593 [2024-11-03 04:33:48.472838] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:25.593 [2024-11-03 04:33:48.472849] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:25.593 [2024-11-03 04:33:48.472858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.593 [2024-11-03 04:33:48.481699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.593 [2024-11-03 04:33:48.481731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.593 [2024-11-03 04:33:48.488605] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.593 [2024-11-03 04:33:48.488748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:25.593 [2024-11-03 04:33:48.504922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.593 0 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.593 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:25.854 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.854 04:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:25.854 "subsystems": [ 00:13:25.854 { 00:13:25.854 "subsystem": "fsdev", 00:13:25.854 "config": [ 00:13:25.854 { 00:13:25.854 "method": "fsdev_set_opts", 00:13:25.854 "params": { 00:13:25.854 "fsdev_io_pool_size": 65535, 00:13:25.854 "fsdev_io_cache_size": 256 00:13:25.854 } 00:13:25.854 } 00:13:25.854 ] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "keyring", 00:13:25.854 "config": [] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "iobuf", 00:13:25.854 "config": [ 00:13:25.854 { 
00:13:25.854 "method": "iobuf_set_options", 00:13:25.854 "params": { 00:13:25.854 "small_pool_count": 8192, 00:13:25.854 "large_pool_count": 1024, 00:13:25.854 "small_bufsize": 8192, 00:13:25.854 "large_bufsize": 135168, 00:13:25.854 "enable_numa": false 00:13:25.854 } 00:13:25.854 } 00:13:25.854 ] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "sock", 00:13:25.854 "config": [ 00:13:25.854 { 00:13:25.854 "method": "sock_set_default_impl", 00:13:25.854 "params": { 00:13:25.854 "impl_name": "posix" 00:13:25.854 } 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "method": "sock_impl_set_options", 00:13:25.854 "params": { 00:13:25.854 "impl_name": "ssl", 00:13:25.854 "recv_buf_size": 4096, 00:13:25.854 "send_buf_size": 4096, 00:13:25.854 "enable_recv_pipe": true, 00:13:25.854 "enable_quickack": false, 00:13:25.854 "enable_placement_id": 0, 00:13:25.854 "enable_zerocopy_send_server": true, 00:13:25.854 "enable_zerocopy_send_client": false, 00:13:25.854 "zerocopy_threshold": 0, 00:13:25.854 "tls_version": 0, 00:13:25.854 "enable_ktls": false 00:13:25.854 } 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "method": "sock_impl_set_options", 00:13:25.854 "params": { 00:13:25.854 "impl_name": "posix", 00:13:25.854 "recv_buf_size": 2097152, 00:13:25.854 "send_buf_size": 2097152, 00:13:25.854 "enable_recv_pipe": true, 00:13:25.854 "enable_quickack": false, 00:13:25.854 "enable_placement_id": 0, 00:13:25.854 "enable_zerocopy_send_server": true, 00:13:25.854 "enable_zerocopy_send_client": false, 00:13:25.854 "zerocopy_threshold": 0, 00:13:25.854 "tls_version": 0, 00:13:25.854 "enable_ktls": false 00:13:25.854 } 00:13:25.854 } 00:13:25.854 ] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "vmd", 00:13:25.854 "config": [] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "accel", 00:13:25.854 "config": [ 00:13:25.854 { 00:13:25.854 "method": "accel_set_options", 00:13:25.854 "params": { 00:13:25.854 "small_cache_size": 128, 00:13:25.854 "large_cache_size": 16, 00:13:25.854 "task_count": 2048, 00:13:25.854 "sequence_count": 2048, 00:13:25.854 "buf_count": 2048 00:13:25.854 } 00:13:25.854 } 00:13:25.854 ] 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "subsystem": "bdev", 00:13:25.854 "config": [ 00:13:25.854 { 00:13:25.854 "method": "bdev_set_options", 00:13:25.854 "params": { 00:13:25.854 "bdev_io_pool_size": 65535, 00:13:25.854 "bdev_io_cache_size": 256, 00:13:25.854 "bdev_auto_examine": true, 00:13:25.854 "iobuf_small_cache_size": 128, 00:13:25.854 "iobuf_large_cache_size": 16 00:13:25.854 } 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "method": "bdev_raid_set_options", 00:13:25.854 "params": { 00:13:25.854 "process_window_size_kb": 1024, 00:13:25.854 "process_max_bandwidth_mb_sec": 0 00:13:25.854 } 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "method": "bdev_iscsi_set_options", 00:13:25.854 "params": { 00:13:25.854 "timeout_sec": 30 00:13:25.854 } 00:13:25.854 }, 00:13:25.854 { 00:13:25.854 "method": "bdev_nvme_set_options", 00:13:25.854 "params": { 00:13:25.854 "action_on_timeout": "none", 00:13:25.854 "timeout_us": 0, 00:13:25.854 "timeout_admin_us": 0, 00:13:25.854 "keep_alive_timeout_ms": 10000, 00:13:25.854 "arbitration_burst": 0, 00:13:25.854 "low_priority_weight": 0, 00:13:25.854 "medium_priority_weight": 0, 00:13:25.854 "high_priority_weight": 0, 00:13:25.854 "nvme_adminq_poll_period_us": 10000, 00:13:25.854 "nvme_ioq_poll_period_us": 0, 00:13:25.854 "io_queue_requests": 0, 00:13:25.854 "delay_cmd_submit": true, 00:13:25.854 "transport_retry_count": 4, 00:13:25.854 
"bdev_retry_count": 3, 00:13:25.854 "transport_ack_timeout": 0, 00:13:25.854 "ctrlr_loss_timeout_sec": 0, 00:13:25.854 "reconnect_delay_sec": 0, 00:13:25.854 "fast_io_fail_timeout_sec": 0, 00:13:25.854 "disable_auto_failback": false, 00:13:25.854 "generate_uuids": false, 00:13:25.854 "transport_tos": 0, 00:13:25.854 "nvme_error_stat": false, 00:13:25.854 "rdma_srq_size": 0, 00:13:25.854 "io_path_stat": false, 00:13:25.854 "allow_accel_sequence": false, 00:13:25.854 "rdma_max_cq_size": 0, 00:13:25.854 "rdma_cm_event_timeout_ms": 0, 00:13:25.854 "dhchap_digests": [ 00:13:25.854 "sha256", 00:13:25.854 "sha384", 00:13:25.854 "sha512" 00:13:25.854 ], 00:13:25.855 "dhchap_dhgroups": [ 00:13:25.855 "null", 00:13:25.855 "ffdhe2048", 00:13:25.855 "ffdhe3072", 00:13:25.855 "ffdhe4096", 00:13:25.855 "ffdhe6144", 00:13:25.855 "ffdhe8192" 00:13:25.855 ] 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "bdev_nvme_set_hotplug", 00:13:25.855 "params": { 00:13:25.855 "period_us": 100000, 00:13:25.855 "enable": false 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "bdev_malloc_create", 00:13:25.855 "params": { 00:13:25.855 "name": "malloc0", 00:13:25.855 "num_blocks": 8192, 00:13:25.855 "block_size": 4096, 00:13:25.855 "physical_block_size": 4096, 00:13:25.855 "uuid": "6f0992fc-e356-479b-9827-da5c4d158976", 00:13:25.855 "optimal_io_boundary": 0, 00:13:25.855 "md_size": 0, 00:13:25.855 "dif_type": 0, 00:13:25.855 "dif_is_head_of_md": false, 00:13:25.855 "dif_pi_format": 0 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "bdev_wait_for_examine" 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "scsi", 00:13:25.855 "config": null 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "scheduler", 00:13:25.855 "config": [ 00:13:25.855 { 00:13:25.855 "method": "framework_set_scheduler", 00:13:25.855 "params": { 00:13:25.855 "name": "static" 00:13:25.855 } 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "vhost_scsi", 00:13:25.855 "config": [] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "vhost_blk", 00:13:25.855 "config": [] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "ublk", 00:13:25.855 "config": [ 00:13:25.855 { 00:13:25.855 "method": "ublk_create_target", 00:13:25.855 "params": { 00:13:25.855 "cpumask": "1" 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "ublk_start_disk", 00:13:25.855 "params": { 00:13:25.855 "bdev_name": "malloc0", 00:13:25.855 "ublk_id": 0, 00:13:25.855 "num_queues": 1, 00:13:25.855 "queue_depth": 128 00:13:25.855 } 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "nbd", 00:13:25.855 "config": [] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "nvmf", 00:13:25.855 "config": [ 00:13:25.855 { 00:13:25.855 "method": "nvmf_set_config", 00:13:25.855 "params": { 00:13:25.855 "discovery_filter": "match_any", 00:13:25.855 "admin_cmd_passthru": { 00:13:25.855 "identify_ctrlr": false 00:13:25.855 }, 00:13:25.855 "dhchap_digests": [ 00:13:25.855 "sha256", 00:13:25.855 "sha384", 00:13:25.855 "sha512" 00:13:25.855 ], 00:13:25.855 "dhchap_dhgroups": [ 00:13:25.855 "null", 00:13:25.855 "ffdhe2048", 00:13:25.855 "ffdhe3072", 00:13:25.855 "ffdhe4096", 00:13:25.855 "ffdhe6144", 00:13:25.855 "ffdhe8192" 00:13:25.855 ] 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "nvmf_set_max_subsystems", 00:13:25.855 "params": { 00:13:25.855 "max_subsystems": 1024 
00:13:25.855 } 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "method": "nvmf_set_crdt", 00:13:25.855 "params": { 00:13:25.855 "crdt1": 0, 00:13:25.855 "crdt2": 0, 00:13:25.855 "crdt3": 0 00:13:25.855 } 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "subsystem": "iscsi", 00:13:25.855 "config": [ 00:13:25.855 { 00:13:25.855 "method": "iscsi_set_options", 00:13:25.855 "params": { 00:13:25.855 "node_base": "iqn.2016-06.io.spdk", 00:13:25.855 "max_sessions": 128, 00:13:25.855 "max_connections_per_session": 2, 00:13:25.855 "max_queue_depth": 64, 00:13:25.855 "default_time2wait": 2, 00:13:25.855 "default_time2retain": 20, 00:13:25.855 "first_burst_length": 8192, 00:13:25.855 "immediate_data": true, 00:13:25.855 "allow_duplicated_isid": false, 00:13:25.855 "error_recovery_level": 0, 00:13:25.855 "nop_timeout": 60, 00:13:25.855 "nop_in_interval": 30, 00:13:25.855 "disable_chap": false, 00:13:25.855 "require_chap": false, 00:13:25.855 "mutual_chap": false, 00:13:25.855 "chap_group": 0, 00:13:25.855 "max_large_datain_per_connection": 64, 00:13:25.855 "max_r2t_per_connection": 4, 00:13:25.855 "pdu_pool_size": 36864, 00:13:25.855 "immediate_data_pool_size": 16384, 00:13:25.855 "data_out_pool_size": 2048 00:13:25.855 } 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }' 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70751 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70751 ']' 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70751 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70751 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:25.855 killing process with pid 70751 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70751' 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70751 00:13:25.855 04:33:48 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70751 00:13:27.241 [2024-11-03 04:33:50.139092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:27.241 [2024-11-03 04:33:50.180707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:27.241 [2024-11-03 04:33:50.180866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:27.241 [2024-11-03 04:33:50.189616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:27.241 [2024-11-03 04:33:50.189683] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:27.241 [2024-11-03 04:33:50.189698] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:27.241 [2024-11-03 04:33:50.189731] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:27.241 [2024-11-03 04:33:50.189893] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70812 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70812 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70812 ']' 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:28.645 04:33:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:28.645 "subsystems": [ 00:13:28.645 { 00:13:28.645 "subsystem": "fsdev", 00:13:28.645 "config": [ 00:13:28.645 { 00:13:28.645 "method": "fsdev_set_opts", 00:13:28.645 "params": { 00:13:28.645 "fsdev_io_pool_size": 65535, 00:13:28.645 "fsdev_io_cache_size": 256 00:13:28.645 } 00:13:28.645 } 00:13:28.645 ] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "keyring", 00:13:28.645 "config": [] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "iobuf", 00:13:28.645 "config": [ 00:13:28.645 { 00:13:28.645 "method": "iobuf_set_options", 00:13:28.645 "params": { 00:13:28.645 "small_pool_count": 8192, 00:13:28.645 "large_pool_count": 1024, 00:13:28.645 "small_bufsize": 8192, 00:13:28.645 "large_bufsize": 135168, 00:13:28.645 "enable_numa": false 00:13:28.645 } 00:13:28.645 } 00:13:28.645 ] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "sock", 00:13:28.645 "config": [ 00:13:28.645 { 00:13:28.645 "method": "sock_set_default_impl", 00:13:28.645 "params": { 00:13:28.645 "impl_name": "posix" 00:13:28.645 } 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "method": "sock_impl_set_options", 00:13:28.645 "params": { 00:13:28.645 "impl_name": "ssl", 00:13:28.645 "recv_buf_size": 4096, 00:13:28.645 "send_buf_size": 4096, 00:13:28.645 "enable_recv_pipe": true, 00:13:28.645 "enable_quickack": false, 00:13:28.645 "enable_placement_id": 0, 00:13:28.645 "enable_zerocopy_send_server": true, 00:13:28.645 "enable_zerocopy_send_client": false, 00:13:28.645 "zerocopy_threshold": 0, 00:13:28.645 "tls_version": 0, 00:13:28.645 "enable_ktls": false 00:13:28.645 } 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "method": "sock_impl_set_options", 00:13:28.645 "params": { 00:13:28.645 "impl_name": "posix", 00:13:28.645 "recv_buf_size": 2097152, 00:13:28.645 "send_buf_size": 2097152, 00:13:28.645 "enable_recv_pipe": true, 00:13:28.645 "enable_quickack": false, 00:13:28.645 "enable_placement_id": 0, 00:13:28.645 "enable_zerocopy_send_server": true, 00:13:28.645 "enable_zerocopy_send_client": false, 00:13:28.645 "zerocopy_threshold": 0, 00:13:28.645 "tls_version": 0, 00:13:28.645 "enable_ktls": false 00:13:28.645 } 00:13:28.645 } 00:13:28.645 ] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "vmd", 00:13:28.645 "config": [] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "accel", 00:13:28.645 "config": [ 00:13:28.645 { 00:13:28.645 "method": "accel_set_options", 00:13:28.645 "params": { 00:13:28.645 "small_cache_size": 128, 
00:13:28.645 "large_cache_size": 16, 00:13:28.645 "task_count": 2048, 00:13:28.645 "sequence_count": 2048, 00:13:28.645 "buf_count": 2048 00:13:28.645 } 00:13:28.645 } 00:13:28.645 ] 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "subsystem": "bdev", 00:13:28.645 "config": [ 00:13:28.645 { 00:13:28.645 "method": "bdev_set_options", 00:13:28.645 "params": { 00:13:28.645 "bdev_io_pool_size": 65535, 00:13:28.645 "bdev_io_cache_size": 256, 00:13:28.645 "bdev_auto_examine": true, 00:13:28.645 "iobuf_small_cache_size": 128, 00:13:28.645 "iobuf_large_cache_size": 16 00:13:28.645 } 00:13:28.645 }, 00:13:28.645 { 00:13:28.645 "method": "bdev_raid_set_options", 00:13:28.645 "params": { 00:13:28.645 "process_window_size_kb": 1024, 00:13:28.645 "process_max_bandwidth_mb_sec": 0 00:13:28.645 } 00:13:28.645 }, 00:13:28.645 { 00:13:28.646 "method": "bdev_iscsi_set_options", 00:13:28.646 "params": { 00:13:28.646 "timeout_sec": 30 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "bdev_nvme_set_options", 00:13:28.646 "params": { 00:13:28.646 "action_on_timeout": "none", 00:13:28.646 "timeout_us": 0, 00:13:28.646 "timeout_admin_us": 0, 00:13:28.646 "keep_alive_timeout_ms": 10000, 00:13:28.646 "arbitration_burst": 0, 00:13:28.646 "low_priority_weight": 0, 00:13:28.646 "medium_priority_weight": 0, 00:13:28.646 "high_priority_weight": 0, 00:13:28.646 "nvme_adminq_poll_period_us": 10000, 00:13:28.646 "nvme_ioq_poll_period_us": 0, 00:13:28.646 "io_queue_requests": 0, 00:13:28.646 "delay_cmd_submit": true, 00:13:28.646 "transport_retry_count": 4, 00:13:28.646 "bdev_retry_count": 3, 00:13:28.646 "transport_ack_timeout": 0, 00:13:28.646 "ctrlr_loss_timeout_sec": 0, 00:13:28.646 "reconnect_delay_sec": 0, 00:13:28.646 "fast_io_fail_timeout_sec": 0, 00:13:28.646 "disable_auto_failback": false, 00:13:28.646 "generate_uuids": false, 00:13:28.646 "transport_tos": 0, 00:13:28.646 "nvme_error_stat": false, 00:13:28.646 "rdma_srq_size": 0, 00:13:28.646 "io_path_stat": false, 00:13:28.646 "allow_accel_sequence": false, 00:13:28.646 "rdma_max_cq_size": 0, 00:13:28.646 "rdma_cm_event_timeout_ms": 0, 00:13:28.646 "dhchap_digests": [ 00:13:28.646 "sha256", 00:13:28.646 "sha384", 00:13:28.646 "sha512" 00:13:28.646 ], 00:13:28.646 "dhchap_dhgroups": [ 00:13:28.646 "null", 00:13:28.646 "ffdhe2048", 00:13:28.646 "ffdhe3072", 00:13:28.646 "ffdhe4096", 00:13:28.646 "ffdhe6144", 00:13:28.646 "ffdhe8192" 00:13:28.646 ] 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "bdev_nvme_set_hotplug", 00:13:28.646 "params": { 00:13:28.646 "period_us": 100000, 00:13:28.646 "enable": false 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "bdev_malloc_create", 00:13:28.646 "params": { 00:13:28.646 "name": "malloc0", 00:13:28.646 "num_blocks": 8192, 00:13:28.646 "block_size": 4096, 00:13:28.646 "physical_block_size": 4096, 00:13:28.646 "uuid": "6f0992fc-e356-479b-9827-da5c4d158976", 00:13:28.646 "optimal_io_boundary": 0, 00:13:28.646 "md_size": 0, 00:13:28.646 "dif_type": 0, 00:13:28.646 "dif_is_head_of_md": false, 00:13:28.646 "dif_pi_format": 0 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "bdev_wait_for_examine" 00:13:28.646 } 00:13:28.646 ] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "scsi", 00:13:28.646 "config": null 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "scheduler", 00:13:28.646 "config": [ 00:13:28.646 { 00:13:28.646 "method": "framework_set_scheduler", 00:13:28.646 "params": { 00:13:28.646 "name": "static" 00:13:28.646 } 
00:13:28.646 } 00:13:28.646 ] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "vhost_scsi", 00:13:28.646 "config": [] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "vhost_blk", 00:13:28.646 "config": [] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "ublk", 00:13:28.646 "config": [ 00:13:28.646 { 00:13:28.646 "method": "ublk_create_target", 00:13:28.646 "params": { 00:13:28.646 "cpumask": "1" 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "ublk_start_disk", 00:13:28.646 "params": { 00:13:28.646 "bdev_name": "malloc0", 00:13:28.646 "ublk_id": 0, 00:13:28.646 "num_queues": 1, 00:13:28.646 "queue_depth": 128 00:13:28.646 } 00:13:28.646 } 00:13:28.646 ] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "nbd", 00:13:28.646 "config": [] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "nvmf", 00:13:28.646 "config": [ 00:13:28.646 { 00:13:28.646 "method": "nvmf_set_config", 00:13:28.646 "params": { 00:13:28.646 "discovery_filter": "match_any", 00:13:28.646 "admin_cmd_passthru": { 00:13:28.646 "identify_ctrlr": false 00:13:28.646 }, 00:13:28.646 "dhchap_digests": [ 00:13:28.646 "sha256", 00:13:28.646 "sha384", 00:13:28.646 "sha512" 00:13:28.646 ], 00:13:28.646 "dhchap_dhgroups": [ 00:13:28.646 "null", 00:13:28.646 "ffdhe2048", 00:13:28.646 "ffdhe3072", 00:13:28.646 "ffdhe4096", 00:13:28.646 "ffdhe6144", 00:13:28.646 "ffdhe8192" 00:13:28.646 ] 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "nvmf_set_max_subsystems", 00:13:28.646 "params": { 00:13:28.646 "max_subsystems": 1024 00:13:28.646 } 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "method": "nvmf_set_crdt", 00:13:28.646 "params": { 00:13:28.646 "crdt1": 0, 00:13:28.646 "crdt2": 0, 00:13:28.646 "crdt3": 0 00:13:28.646 } 00:13:28.646 } 00:13:28.646 ] 00:13:28.646 }, 00:13:28.646 { 00:13:28.646 "subsystem": "iscsi", 00:13:28.646 "config": [ 00:13:28.646 { 00:13:28.646 "method": "iscsi_set_options", 00:13:28.646 "params": { 00:13:28.646 "node_base": "iqn.2016-06.io.spdk", 00:13:28.646 "max_sessions": 128, 00:13:28.646 "max_connections_per_session": 2, 00:13:28.646 "max_queue_depth": 64, 00:13:28.646 "default_time2wait": 2, 00:13:28.646 "default_time2retain": 20, 00:13:28.646 "first_burst_length": 8192, 00:13:28.646 "immediate_data": true, 00:13:28.646 "allow_duplicated_isid": false, 00:13:28.646 "error_recovery_level": 0, 00:13:28.646 "nop_timeout": 60, 00:13:28.646 "nop_in_interval": 30, 00:13:28.646 "disable_chap": false, 00:13:28.646 "require_chap": false, 00:13:28.646 "mutual_chap": false, 00:13:28.646 "chap_group": 0, 00:13:28.646 "max_large_datain_per_connection": 64, 00:13:28.646 "max_r2t_per_connection": 4, 00:13:28.646 "pdu_pool_size": 36864, 00:13:28.646 "immediate_data_pool_size": 16384, 00:13:28.646 "data_out_pool_size": 2048 00:13:28.646 } 00:13:28.646 } 00:13:28.646 ] 00:13:28.646 } 00:13:28.646 ] 00:13:28.646 }' 00:13:28.646 [2024-11-03 04:33:51.548283] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:13:28.646 [2024-11-03 04:33:51.548406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:13:28.646 [2024-11-03 04:33:51.703555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.905 [2024-11-03 04:33:51.794770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.471 [2024-11-03 04:33:52.430580] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:29.471 [2024-11-03 04:33:52.431236] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:29.471 [2024-11-03 04:33:52.438668] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:29.471 [2024-11-03 04:33:52.438728] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:29.471 [2024-11-03 04:33:52.438735] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:29.471 [2024-11-03 04:33:52.438740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:29.471 [2024-11-03 04:33:52.447628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:29.471 [2024-11-03 04:33:52.447646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:29.471 [2024-11-03 04:33:52.454585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:29.471 [2024-11-03 04:33:52.454659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:29.471 [2024-11-03 04:33:52.471587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70812 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70812 ']' 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70812 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.471 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70812 00:13:29.730 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:29.730 killing process with pid 70812 00:13:29.730 
04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:29.730 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70812' 00:13:29.730 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70812 00:13:29.730 04:33:52 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70812 00:13:30.664 [2024-11-03 04:33:53.647030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:30.664 [2024-11-03 04:33:53.677645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:30.664 [2024-11-03 04:33:53.677736] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:30.664 [2024-11-03 04:33:53.685579] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:30.664 [2024-11-03 04:33:53.685619] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:30.664 [2024-11-03 04:33:53.685624] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:30.664 [2024-11-03 04:33:53.685644] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:30.664 [2024-11-03 04:33:53.685752] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:32.039 04:33:54 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:32.039 ************************************ 00:13:32.039 END TEST test_save_ublk_config 00:13:32.039 ************************************ 00:13:32.039 00:13:32.039 real 0m7.556s 00:13:32.039 user 0m5.003s 00:13:32.039 sys 0m3.200s 00:13:32.039 04:33:54 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.039 04:33:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.039 04:33:54 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70885 00:13:32.039 04:33:54 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.039 04:33:54 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70885 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@833 -- # '[' -z 70885 ']' 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.039 04:33:54 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:32.039 04:33:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.039 [2024-11-03 04:33:54.965888] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
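For reference, the save/restore flow exercised above by test_save_ublk_config can be reproduced by hand with roughly the following commands (a sketch only; it assumes a standard SPDK checkout with build/bin/spdk_tgt and scripts/rpc.py available, and a running target that already has one ublk disk — the test itself pipes the saved JSON through /dev/fd/63 instead of a file):

    scripts/rpc.py save_config > ublk_config.json               # dump the live JSON configuration
    # stop the old target, then start a fresh one from the saved file
    build/bin/spdk_tgt -L ublk -c ublk_config.json &
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'    # expect /dev/ublkb0 to reappear
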
00:13:32.039 [2024-11-03 04:33:54.966007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70885 ] 00:13:32.039 [2024-11-03 04:33:55.115330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.297 [2024-11-03 04:33:55.191748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.297 [2024-11-03 04:33:55.191842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.863 04:33:55 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.864 04:33:55 ublk -- common/autotest_common.sh@866 -- # return 0 00:13:32.864 04:33:55 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:32.864 04:33:55 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:32.864 04:33:55 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.864 04:33:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.864 ************************************ 00:13:32.864 START TEST test_create_ublk 00:13:32.864 ************************************ 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:13:32.864 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.864 [2024-11-03 04:33:55.766576] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:32.864 [2024-11-03 04:33:55.768082] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.864 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:32.864 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.864 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:32.864 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.864 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:32.864 [2024-11-03 04:33:55.926705] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:32.864 [2024-11-03 04:33:55.927019] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:32.864 [2024-11-03 04:33:55.927032] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:32.864 [2024-11-03 04:33:55.927038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.864 [2024-11-03 04:33:55.935762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.864 [2024-11-03 04:33:55.935779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.864 
[2024-11-03 04:33:55.942591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:33.122 [2024-11-03 04:33:55.952622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:33.122 [2024-11-03 04:33:55.976591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:33.122 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.122 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:33.122 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:33.122 04:33:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:33.122 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.122 04:33:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:33.122 04:33:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:33.122 { 00:13:33.122 "ublk_device": "/dev/ublkb0", 00:13:33.122 "id": 0, 00:13:33.122 "queue_depth": 512, 00:13:33.122 "num_queues": 4, 00:13:33.122 "bdev_name": "Malloc0" 00:13:33.122 } 00:13:33.122 ]' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:13:33.122 04:33:56 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:33.380 fio: verification read phase will never start because write phase uses all of runtime 00:13:33.380 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:33.380 fio-3.35 00:13:33.380 Starting 1 process 00:13:43.347 00:13:43.347 fio_test: (groupid=0, jobs=1): err= 0: pid=70924: Sun Nov 3 04:34:06 2024 00:13:43.347 write: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(588MiB/10001msec); 0 zone resets 00:13:43.347 clat (usec): min=37, max=4046, avg=65.69, stdev=93.57 00:13:43.347 lat (usec): min=38, max=4046, avg=66.09, stdev=93.58 00:13:43.347 clat percentiles (usec): 00:13:43.347 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 58], 00:13:43.347 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:13:43.347 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74], 00:13:43.347 | 99.00th=[ 84], 99.50th=[ 91], 99.90th=[ 1893], 99.95th=[ 2802], 00:13:43.347 | 99.99th=[ 3294] 00:13:43.347 bw ( KiB/s): min=56512, max=69128, per=99.89%, avg=60173.89, stdev=2856.73, samples=19 00:13:43.347 iops : min=14128, max=17282, avg=15043.47, stdev=714.18, samples=19 00:13:43.347 lat (usec) : 50=5.16%, 100=94.49%, 250=0.15%, 500=0.02%, 750=0.01% 00:13:43.347 lat (usec) : 1000=0.01% 00:13:43.347 lat (msec) : 2=0.07%, 4=0.09%, 10=0.01% 00:13:43.347 cpu : usr=1.71%, sys=13.90%, ctx=150635, majf=0, minf=794 00:13:43.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.347 issued rwts: total=0,150620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:43.347 00:13:43.347 Run status group 0 (all jobs): 00:13:43.347 WRITE: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=588MiB (617MB), run=10001-10001msec 00:13:43.347 00:13:43.347 Disk stats (read/write): 00:13:43.347 ublkb0: ios=0/149013, merge=0/0, ticks=0/8222, in_queue=8223, util=99.10% 00:13:43.347 04:34:06 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:43.347 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.347 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.347 [2024-11-03 04:34:06.392084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:43.347 [2024-11-03 04:34:06.429110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:43.605 [2024-11-03 04:34:06.430088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:43.605 [2024-11-03 04:34:06.435588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:43.605 [2024-11-03 04:34:06.435844] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:43.605 [2024-11-03 04:34:06.435858] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.605 04:34:06 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.605 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.605 [2024-11-03 04:34:06.451645] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:43.605 request: 00:13:43.605 { 00:13:43.605 "ublk_id": 0, 00:13:43.605 "method": "ublk_stop_disk", 00:13:43.605 "req_id": 1 00:13:43.605 } 00:13:43.605 Got JSON-RPC error response 00:13:43.605 response: 00:13:43.605 { 00:13:43.605 "code": -19, 00:13:43.605 "message": "No such device" 00:13:43.605 } 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.606 04:34:06 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.606 [2024-11-03 04:34:06.467639] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:43.606 [2024-11-03 04:34:06.475577] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:43.606 [2024-11-03 04:34:06.475617] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.606 04:34:06 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.606 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.864 04:34:06 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:43.864 04:34:06 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:43.864 04:34:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:43.864 00:13:43.864 real 0m11.174s 00:13:43.864 user 0m0.467s 00:13:43.864 sys 0m1.473s 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:43.864 04:34:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.864 ************************************ 00:13:43.864 END TEST test_create_ublk 00:13:43.864 ************************************ 00:13:44.122 04:34:06 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:44.122 04:34:06 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:44.122 04:34:06 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.122 04:34:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.122 ************************************ 00:13:44.122 START TEST test_create_multi_ublk 00:13:44.122 ************************************ 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.122 [2024-11-03 04:34:06.986569] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:44.122 [2024-11-03 04:34:06.988082] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.122 04:34:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.122 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.122 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:44.122 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:44.122 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.122 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.122 [2024-11-03 04:34:07.202692] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:13:44.122 [2024-11-03 04:34:07.203001] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:44.122 [2024-11-03 04:34:07.203014] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:44.122 [2024-11-03 04:34:07.203022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.380 [2024-11-03 04:34:07.214625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.380 [2024-11-03 04:34:07.214644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.380 [2024-11-03 04:34:07.226580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.380 [2024-11-03 04:34:07.227079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.380 [2024-11-03 04:34:07.272585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.380 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 [2024-11-03 04:34:07.491677] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:44.639 [2024-11-03 04:34:07.491976] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:44.639 [2024-11-03 04:34:07.491989] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:44.639 [2024-11-03 04:34:07.491994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.639 [2024-11-03 04:34:07.499597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.639 [2024-11-03 04:34:07.499613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.639 [2024-11-03 04:34:07.507586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.639 [2024-11-03 04:34:07.508077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:44.639 [2024-11-03 04:34:07.516612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.639 04:34:07 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.639 [2024-11-03 04:34:07.675681] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:44.639 [2024-11-03 04:34:07.675984] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:44.639 [2024-11-03 04:34:07.675996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:44.639 [2024-11-03 04:34:07.676004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.639 [2024-11-03 04:34:07.683601] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.639 [2024-11-03 04:34:07.683621] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.639 [2024-11-03 04:34:07.691604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.639 [2024-11-03 04:34:07.692106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:44.639 [2024-11-03 04:34:07.700607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.639 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.897 [2024-11-03 04:34:07.859698] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:44.897 [2024-11-03 04:34:07.859996] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:44.897 [2024-11-03 04:34:07.860010] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:44.897 [2024-11-03 04:34:07.860015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.897 [2024-11-03 
04:34:07.867599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.897 [2024-11-03 04:34:07.867615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.897 [2024-11-03 04:34:07.875593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.897 [2024-11-03 04:34:07.876079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:44.897 [2024-11-03 04:34:07.884618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:44.897 { 00:13:44.897 "ublk_device": "/dev/ublkb0", 00:13:44.897 "id": 0, 00:13:44.897 "queue_depth": 512, 00:13:44.897 "num_queues": 4, 00:13:44.897 "bdev_name": "Malloc0" 00:13:44.897 }, 00:13:44.897 { 00:13:44.897 "ublk_device": "/dev/ublkb1", 00:13:44.897 "id": 1, 00:13:44.897 "queue_depth": 512, 00:13:44.897 "num_queues": 4, 00:13:44.897 "bdev_name": "Malloc1" 00:13:44.897 }, 00:13:44.897 { 00:13:44.897 "ublk_device": "/dev/ublkb2", 00:13:44.897 "id": 2, 00:13:44.897 "queue_depth": 512, 00:13:44.897 "num_queues": 4, 00:13:44.897 "bdev_name": "Malloc2" 00:13:44.897 }, 00:13:44.897 { 00:13:44.897 "ublk_device": "/dev/ublkb3", 00:13:44.897 "id": 3, 00:13:44.897 "queue_depth": 512, 00:13:44.897 "num_queues": 4, 00:13:44.897 "bdev_name": "Malloc3" 00:13:44.897 } 00:13:44.897 ]' 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:44.897 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:44.898 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:44.898 04:34:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
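The four-device listing above maps onto the same RPCs the script drives through rpc_cmd; a minimal sketch of recreating it by hand (assuming scripts/rpc.py from an SPDK checkout and a target started with -L ublk):

    scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MB bdev, 4096-byte blocks
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # 4 queues, queue depth 512
    done
    scripts/rpc.py ublk_get_disks                                # should list /dev/ublkb0 through /dev/ublkb3
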
00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.156 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:45.414 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:45.673 [2024-11-03 04:34:08.579660] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.673 [2024-11-03 04:34:08.619013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.673 [2024-11-03 04:34:08.620237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.673 [2024-11-03 04:34:08.626596] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.673 [2024-11-03 04:34:08.626850] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.673 [2024-11-03 04:34:08.626864] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:45.673 [2024-11-03 04:34:08.642636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.673 [2024-11-03 04:34:08.674629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.673 [2024-11-03 04:34:08.675470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.673 [2024-11-03 04:34:08.682595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.673 [2024-11-03 04:34:08.682823] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:45.673 [2024-11-03 04:34:08.682837] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:45.673 [2024-11-03 04:34:08.698652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.673 [2024-11-03 04:34:08.738998] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.673 [2024-11-03 04:34:08.740160] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.673 [2024-11-03 04:34:08.747617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.673 [2024-11-03 04:34:08.747853] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:45.673 [2024-11-03 04:34:08.747866] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.673 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
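Teardown, mirrored in the trace that follows, is the reverse sequence; a rough sketch under the same assumptions:

    for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done        # detach each /dev/ublkbN
    scripts/rpc.py -t 120 ublk_destroy_target                          # longer RPC timeout, as the script uses
    for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete Malloc$i; done
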
00:13:45.931 [2024-11-03 04:34:08.762640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.931 [2024-11-03 04:34:08.802626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.931 [2024-11-03 04:34:08.803308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.931 [2024-11-03 04:34:08.810602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.931 [2024-11-03 04:34:08.810848] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:45.931 [2024-11-03 04:34:08.810861] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:45.931 04:34:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.931 04:34:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:45.931 [2024-11-03 04:34:09.010635] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:46.190 [2024-11-03 04:34:09.018941] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:46.190 [2024-11-03 04:34:09.018970] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:46.190 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:46.190 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.190 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:46.190 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.190 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:46.448 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.448 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.448 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:46.448 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.448 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:46.715 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.715 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.716 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:46.716 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.716 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:46.975 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.975 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.975 04:34:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:46.975 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.975 04:34:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:47.233 ************************************ 00:13:47.233 END TEST test_create_multi_ublk 00:13:47.233 ************************************ 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:47.233 00:13:47.233 real 0m3.251s 00:13:47.233 user 0m0.845s 00:13:47.233 sys 0m0.142s 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.233 04:34:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 04:34:10 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:47.233 04:34:10 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:47.233 04:34:10 ublk -- ublk/ublk.sh@130 -- # killprocess 70885 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@952 -- # '[' -z 70885 ']' 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@956 -- # kill -0 70885 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@957 -- # uname 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70885 00:13:47.233 killing process with pid 70885 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70885' 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@971 -- # kill 70885 00:13:47.233 04:34:10 ublk -- common/autotest_common.sh@976 -- # wait 70885 00:13:47.798 [2024-11-03 04:34:10.798230] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:47.798 [2024-11-03 04:34:10.798277] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:48.365 00:13:48.365 real 0m24.330s 00:13:48.365 user 0m34.072s 00:13:48.365 sys 0m10.059s 00:13:48.365 04:34:11 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.365 ************************************ 00:13:48.365 END TEST ublk 00:13:48.365 ************************************ 00:13:48.365 04:34:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.625 04:34:11 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:48.625 04:34:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:13:48.625 04:34:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.625 04:34:11 -- common/autotest_common.sh@10 -- # set +x 00:13:48.625 ************************************ 00:13:48.625 START TEST ublk_recovery 00:13:48.625 ************************************ 00:13:48.625 04:34:11 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:48.625 * Looking for test storage... 00:13:48.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:48.625 04:34:11 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.625 04:34:11 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.626 04:34:11 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.626 --rc genhtml_branch_coverage=1 00:13:48.626 --rc genhtml_function_coverage=1 00:13:48.626 --rc genhtml_legend=1 00:13:48.626 --rc geninfo_all_blocks=1 00:13:48.626 --rc geninfo_unexecuted_blocks=1 00:13:48.626 00:13:48.626 ' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.626 --rc genhtml_branch_coverage=1 00:13:48.626 --rc genhtml_function_coverage=1 00:13:48.626 --rc genhtml_legend=1 00:13:48.626 --rc geninfo_all_blocks=1 00:13:48.626 --rc geninfo_unexecuted_blocks=1 00:13:48.626 00:13:48.626 ' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.626 --rc genhtml_branch_coverage=1 00:13:48.626 --rc genhtml_function_coverage=1 00:13:48.626 --rc genhtml_legend=1 00:13:48.626 --rc geninfo_all_blocks=1 00:13:48.626 --rc geninfo_unexecuted_blocks=1 00:13:48.626 00:13:48.626 ' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.626 --rc genhtml_branch_coverage=1 00:13:48.626 --rc genhtml_function_coverage=1 00:13:48.626 --rc genhtml_legend=1 00:13:48.626 --rc geninfo_all_blocks=1 00:13:48.626 --rc geninfo_unexecuted_blocks=1 00:13:48.626 00:13:48.626 ' 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:48.626 04:34:11 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:48.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71271 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71271 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71271 ']' 00:13:48.626 04:34:11 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:48.626 04:34:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.887 [2024-11-03 04:34:11.726220] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:48.887 [2024-11-03 04:34:11.726335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71271 ] 00:13:48.887 [2024-11-03 04:34:11.885139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:49.147 [2024-11-03 04:34:11.990485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.147 [2024-11-03 04:34:11.990632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:13:49.718 04:34:12 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.718 [2024-11-03 04:34:12.589581] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:49.718 [2024-11-03 04:34:12.591471] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.718 04:34:12 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.718 malloc0 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.718 04:34:12 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:49.718 [2024-11-03 04:34:12.693717] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:49.718 [2024-11-03 04:34:12.693815] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:49.718 [2024-11-03 04:34:12.693826] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:49.718 [2024-11-03 04:34:12.693835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.718 [2024-11-03 04:34:12.702675] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.718 [2024-11-03 04:34:12.702694] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.718 [2024-11-03 04:34:12.709587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.718 [2024-11-03 04:34:12.709725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:49.718 [2024-11-03 04:34:12.732597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.718 1 00:13:49.718 04:34:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.718 04:34:12 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:50.717 04:34:13 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71307 00:13:50.717 04:34:13 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:50.717 04:34:13 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:50.978 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.978 fio-3.35 00:13:50.978 Starting 1 process 00:13:56.252 04:34:18 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71271 00:13:56.252 04:34:18 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:01.540 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71271 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:01.540 04:34:23 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71420 00:14:01.540 04:34:23 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:01.541 04:34:23 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.541 04:34:23 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71420 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71420 ']' 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.541 04:34:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.541 [2024-11-03 04:34:23.845962] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
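The trace above is the crash half of the recovery scenario: ublk_recovery.sh builds a single ublk disk on a malloc bdev, drives it with fio for a while, then SIGKILLs the target so the second spdk_tgt instance (PID 71420 here) has something to recover. A minimal sketch of that sequence, using only the RPCs visible in the trace, with paths shortened and the default /var/tmp/spdk.sock assumed:

    # hedged sketch of the pre-crash setup in ublk_recovery.sh
    modprobe ublk_drv
    "$SPDK_BIN_DIR"/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096          # 64 MiB bdev, 4 KiB blocks
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128          # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 $spdk_pid                                     # simulate a target crash mid-I/O

Once the replacement target is listening, ublk_recover_disk malloc0 1 re-attaches the existing /dev/ublkb1 (the UBLK_CMD_START_USER_RECOVERY / END_USER_RECOVERY exchange traced below), and the same fio job is left to finish against the recovered device.
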
00:14:01.541 [2024-11-03 04:34:23.846377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71420 ] 00:14:01.541 [2024-11-03 04:34:24.010179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.541 [2024-11-03 04:34:24.133746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.541 [2024-11-03 04:34:24.133855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:01.801 04:34:24 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.801 [2024-11-03 04:34:24.837595] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:01.801 [2024-11-03 04:34:24.839886] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.801 04:34:24 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.801 04:34:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.061 malloc0 00:14:02.061 04:34:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.061 04:34:24 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:02.061 04:34:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.061 04:34:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.061 [2024-11-03 04:34:24.967767] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:02.061 [2024-11-03 04:34:24.967818] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:02.061 [2024-11-03 04:34:24.967830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:02.061 [2024-11-03 04:34:24.975636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:02.061 [2024-11-03 04:34:24.975665] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:02.061 1 00:14:02.061 04:34:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.061 04:34:24 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71307 00:14:03.002 [2024-11-03 04:34:25.975713] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:03.002 [2024-11-03 04:34:25.982591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:03.002 [2024-11-03 04:34:25.982610] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:03.936 [2024-11-03 04:34:26.982641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:03.936 [2024-11-03 04:34:26.991583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:03.936 [2024-11-03 04:34:26.991603] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:05.308 [2024-11-03 04:34:27.991638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:05.308 [2024-11-03 04:34:27.996591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:05.308 [2024-11-03 04:34:27.996604] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:05.308 [2024-11-03 04:34:27.996612] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:05.308 [2024-11-03 04:34:27.996682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:27.224 [2024-11-03 04:34:48.825584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:27.224 [2024-11-03 04:34:48.832126] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:27.224 [2024-11-03 04:34:48.839761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:27.224 [2024-11-03 04:34:48.839831] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:53.813 00:14:53.813 fio_test: (groupid=0, jobs=1): err= 0: pid=71310: Sun Nov 3 04:35:13 2024 00:14:53.813 read: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(3163MiB/60002msec) 00:14:53.813 slat (nsec): min=1174, max=2082.5k, avg=5487.94, stdev=2816.10 00:14:53.813 clat (usec): min=1306, max=30101k, avg=4398.52, stdev=250299.88 00:14:53.813 lat (usec): min=1316, max=30101k, avg=4404.00, stdev=250299.87 00:14:53.813 clat percentiles (usec): 00:14:53.813 | 1.00th=[ 1860], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2057], 00:14:53.813 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:14:53.813 | 70.00th=[ 2212], 80.00th=[ 2278], 90.00th=[ 2638], 95.00th=[ 3359], 00:14:53.813 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7767], 99.95th=[ 8848], 00:14:53.813 | 99.99th=[13304] 00:14:53.813 bw ( KiB/s): min=15448, max=121184, per=100.00%, avg=106296.53, stdev=18012.08, samples=60 00:14:53.813 iops : min= 3862, max=30296, avg=26574.13, stdev=4503.02, samples=60 00:14:53.813 write: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(3159MiB/60002msec); 0 zone resets 00:14:53.813 slat (nsec): min=1175, max=1680.9k, avg=5643.92, stdev=2555.87 00:14:53.813 clat (usec): min=1188, max=30101k, avg=5081.56, stdev=283997.93 00:14:53.813 lat (usec): min=1199, max=30101k, avg=5087.20, stdev=283997.92 00:14:53.813 clat percentiles (usec): 00:14:53.813 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2089], 20.00th=[ 2147], 00:14:53.813 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:14:53.813 | 70.00th=[ 2311], 80.00th=[ 2376], 90.00th=[ 2704], 95.00th=[ 3294], 00:14:53.813 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 7898], 99.95th=[ 8848], 00:14:53.813 | 99.99th=[13435] 00:14:53.813 bw ( KiB/s): min=15408, max=120936, per=100.00%, avg=106124.40, stdev=17749.00, samples=60 00:14:53.813 iops : min= 3852, max=30234, avg=26531.10, stdev=4437.25, samples=60 00:14:53.813 lat (msec) : 2=6.29%, 4=90.23%, 10=3.44%, 20=0.03%, >=2000=0.01% 00:14:53.813 cpu : usr=3.00%, sys=15.51%, ctx=53868, majf=0, minf=13 00:14:53.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.813 issued rwts: total=809675,808614,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:53.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.813 00:14:53.813 Run status group 0 (all jobs): 00:14:53.813 READ: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=3163MiB (3316MB), run=60002-60002msec 00:14:53.813 WRITE: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=3159MiB (3312MB), run=60002-60002msec 00:14:53.813 00:14:53.813 Disk stats (read/write): 00:14:53.813 ublkb1: ios=806650/805544, merge=0/0, ticks=3504804/3983475, in_queue=7488280, util=99.91% 00:14:53.813 04:35:13 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:53.813 04:35:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.813 04:35:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:53.813 [2024-11-03 04:35:13.996995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:53.813 [2024-11-03 04:35:14.026701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:53.813 [2024-11-03 04:35:14.026950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:53.813 [2024-11-03 04:35:14.035594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:53.813 [2024-11-03 04:35:14.035775] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:53.813 [2024-11-03 04:35:14.035835] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.813 04:35:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:53.813 [2024-11-03 04:35:14.045677] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:53.813 [2024-11-03 04:35:14.051575] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:53.813 [2024-11-03 04:35:14.051610] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.813 04:35:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:53.813 04:35:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:53.813 04:35:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71420 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71420 ']' 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71420 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71420 00:14:53.813 killing process with pid 71420 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71420' 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71420 00:14:53.813 04:35:14 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71420 00:14:53.813 [2024-11-03 
04:35:15.141904] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:53.813 [2024-11-03 04:35:15.141956] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:53.813 ************************************ 00:14:53.813 END TEST ublk_recovery 00:14:53.813 ************************************ 00:14:53.813 00:14:53.813 real 1m4.391s 00:14:53.813 user 1m44.601s 00:14:53.813 sys 0m24.485s 00:14:53.813 04:35:15 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.813 04:35:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:53.813 04:35:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:53.813 04:35:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:53.813 04:35:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.813 04:35:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:53.813 04:35:15 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.813 04:35:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:53.813 04:35:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.813 04:35:15 -- common/autotest_common.sh@10 -- # set +x 00:14:53.813 ************************************ 00:14:53.813 START TEST ftl 00:14:53.813 ************************************ 00:14:53.813 04:35:15 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.813 * Looking for test storage... 00:14:53.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.813 04:35:16 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.813 04:35:16 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.813 04:35:16 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.813 04:35:16 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.813 04:35:16 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.813 04:35:16 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:53.813 04:35:16 ftl -- scripts/common.sh@345 -- # : 1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.813 04:35:16 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.813 04:35:16 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@353 -- # local d=1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.813 04:35:16 ftl -- scripts/common.sh@355 -- # echo 1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.813 04:35:16 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@353 -- # local d=2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.813 04:35:16 ftl -- scripts/common.sh@355 -- # echo 2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.813 04:35:16 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.813 04:35:16 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.813 04:35:16 ftl -- scripts/common.sh@368 -- # return 0 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.813 --rc genhtml_branch_coverage=1 00:14:53.813 --rc genhtml_function_coverage=1 00:14:53.813 --rc genhtml_legend=1 00:14:53.813 --rc geninfo_all_blocks=1 00:14:53.813 --rc geninfo_unexecuted_blocks=1 00:14:53.813 00:14:53.813 ' 00:14:53.813 04:35:16 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.813 --rc genhtml_branch_coverage=1 00:14:53.813 --rc genhtml_function_coverage=1 00:14:53.813 --rc genhtml_legend=1 00:14:53.813 --rc geninfo_all_blocks=1 00:14:53.814 --rc geninfo_unexecuted_blocks=1 00:14:53.814 00:14:53.814 ' 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.814 --rc genhtml_branch_coverage=1 00:14:53.814 --rc genhtml_function_coverage=1 00:14:53.814 --rc genhtml_legend=1 00:14:53.814 --rc geninfo_all_blocks=1 00:14:53.814 --rc geninfo_unexecuted_blocks=1 00:14:53.814 00:14:53.814 ' 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.814 --rc genhtml_branch_coverage=1 00:14:53.814 --rc genhtml_function_coverage=1 00:14:53.814 --rc genhtml_legend=1 00:14:53.814 --rc geninfo_all_blocks=1 00:14:53.814 --rc geninfo_unexecuted_blocks=1 00:14:53.814 00:14:53.814 ' 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:53.814 04:35:16 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.814 04:35:16 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.814 04:35:16 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.814 04:35:16 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
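The scripts/common.sh xtrace block above is the lcov version gate that runs at the top of each suite: lt 1.15 2 splits both version strings on ".", "-" and ":" and compares them field by field, and because lcov here is older than 2 the pre-2.0 --rc lcov_*_coverage options are selected. A sketch trimmed to the "<" case, assuming purely numeric fields (the real helper also sanitizes each field through decimal):

    # hedged sketch of the version comparison traced above (scripts/common.sh)
    lt() {                                    # lt 1.15 2  ->  returns 0 (1.15 is older)
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                              # equal versions are not "lower"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
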
00:14:53.814 04:35:16 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:53.814 04:35:16 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.814 04:35:16 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.814 04:35:16 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.814 04:35:16 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.814 04:35:16 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.814 04:35:16 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:53.814 04:35:16 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:53.814 04:35:16 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.814 04:35:16 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.814 04:35:16 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:53.814 04:35:16 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.814 04:35:16 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.814 04:35:16 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.814 04:35:16 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.814 04:35:16 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:53.814 04:35:16 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:53.814 04:35:16 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.814 04:35:16 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:53.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:53.814 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.814 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.814 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.814 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72230 00:14:53.814 04:35:16 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72230 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@833 -- # '[' -z 72230 ']' 00:14:53.814 04:35:16 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.814 04:35:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:53.814 [2024-11-03 04:35:16.673636] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:14:53.814 [2024-11-03 04:35:16.673736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72230 ] 00:14:53.814 [2024-11-03 04:35:16.831752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.076 [2024-11-03 04:35:16.928663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.665 04:35:17 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.665 04:35:17 ftl -- common/autotest_common.sh@866 -- # return 0 00:14:54.665 04:35:17 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:54.665 04:35:17 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:55.609 04:35:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:55.609 04:35:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:56.176 04:35:18 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:56.176 04:35:18 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:56.176 04:35:18 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@50 -- # break 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:56.176 04:35:19 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:56.435 04:35:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:56.435 04:35:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:56.435 04:35:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:56.435 04:35:19 ftl -- ftl/ftl.sh@63 -- # break 00:14:56.435 04:35:19 ftl -- ftl/ftl.sh@66 -- # killprocess 72230 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@952 -- # '[' -z 72230 ']' 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@956 -- # kill -0 72230 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@957 -- # uname 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.435 04:35:19 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72230 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:56.435 killing process with pid 72230 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72230' 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@971 -- # kill 72230 00:14:56.435 04:35:19 ftl -- common/autotest_common.sh@976 -- # wait 72230 00:14:57.813 04:35:20 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:57.813 04:35:20 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:57.813 04:35:20 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:57.813 04:35:20 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:57.813 04:35:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:57.813 ************************************ 00:14:57.813 START TEST ftl_fio_basic 00:14:57.813 ************************************ 00:14:57.813 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:57.813 * Looking for test storage... 00:14:57.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:57.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.814 --rc genhtml_branch_coverage=1 00:14:57.814 --rc genhtml_function_coverage=1 00:14:57.814 --rc genhtml_legend=1 00:14:57.814 --rc geninfo_all_blocks=1 00:14:57.814 --rc geninfo_unexecuted_blocks=1 00:14:57.814 00:14:57.814 ' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:57.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.814 --rc genhtml_branch_coverage=1 00:14:57.814 --rc genhtml_function_coverage=1 00:14:57.814 --rc genhtml_legend=1 00:14:57.814 --rc geninfo_all_blocks=1 00:14:57.814 --rc geninfo_unexecuted_blocks=1 00:14:57.814 00:14:57.814 ' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:57.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.814 --rc genhtml_branch_coverage=1 00:14:57.814 --rc genhtml_function_coverage=1 00:14:57.814 --rc genhtml_legend=1 00:14:57.814 --rc geninfo_all_blocks=1 00:14:57.814 --rc geninfo_unexecuted_blocks=1 00:14:57.814 00:14:57.814 ' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:57.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.814 --rc genhtml_branch_coverage=1 00:14:57.814 --rc genhtml_function_coverage=1 00:14:57.814 --rc genhtml_legend=1 00:14:57.814 --rc geninfo_all_blocks=1 00:14:57.814 --rc geninfo_unexecuted_blocks=1 00:14:57.814 00:14:57.814 ' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:57.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72362 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72362 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72362 ']' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:57.814 04:35:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:57.814 [2024-11-03 04:35:20.818822] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
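The fio.sh prologue traced above does three things before any job runs: it maps the requested suite name to a list of job files, records the base and cache PCI addresses handed over by ftl.sh, and exports the names the job files will consume, then starts its own spdk_tgt on three cores and waits for the RPC socket. A condensed sketch, with paths shortened and waitforlisten assumed to be the common.sh helper seen in the trace:

    # hedged sketch of the ftl/fio.sh prologue traced above
    declare -A suite
    suite[basic]='randw-verify randw-verify-j2 randw-verify-depth128'

    device=$1 cache_device=$2            # 0000:00:11.0 and 0000:00:10.0 in this run
    tests=${suite[$3]}                   # "basic" selects the three job files above
    timeout=240

    export FTL_BDEV_NAME=ftl0                         # bdev name the job files target
    export FTL_JSON_CONF=$testdir/config/ftl.json     # config handed to the fio bdev plugin

    "$SPDK_BIN_DIR"/spdk_tgt -m 7 &                   # cores 0-2; PID 72362 here
    svcpid=$!
    waitforlisten $svcpid                             # block until /var/tmp/spdk.sock answers

The bdevs themselves are then created over RPC once the target is listening, which is what the trace below continues with (create_base_bdev on 0000:00:11.0, create_nv_cache_bdev on 0000:00:10.0).
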
00:14:57.814 [2024-11-03 04:35:20.818948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72362 ] 00:14:58.074 [2024-11-03 04:35:20.972776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.074 [2024-11-03 04:35:21.071692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.074 [2024-11-03 04:35:21.072228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.074 [2024-11-03 04:35:21.072268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:58.640 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:58.899 04:35:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:59.157 { 00:14:59.157 "name": "nvme0n1", 00:14:59.157 "aliases": [ 00:14:59.157 "751bba1a-47df-4f70-945b-757b3e4c86c2" 00:14:59.157 ], 00:14:59.157 "product_name": "NVMe disk", 00:14:59.157 "block_size": 4096, 00:14:59.157 "num_blocks": 1310720, 00:14:59.157 "uuid": "751bba1a-47df-4f70-945b-757b3e4c86c2", 00:14:59.157 "numa_id": -1, 00:14:59.157 "assigned_rate_limits": { 00:14:59.157 "rw_ios_per_sec": 0, 00:14:59.157 "rw_mbytes_per_sec": 0, 00:14:59.157 "r_mbytes_per_sec": 0, 00:14:59.157 "w_mbytes_per_sec": 0 00:14:59.157 }, 00:14:59.157 "claimed": false, 00:14:59.157 "zoned": false, 00:14:59.157 "supported_io_types": { 00:14:59.157 "read": true, 00:14:59.157 "write": true, 00:14:59.157 "unmap": true, 00:14:59.157 "flush": true, 00:14:59.157 "reset": true, 00:14:59.157 "nvme_admin": true, 00:14:59.157 "nvme_io": true, 00:14:59.157 "nvme_io_md": false, 00:14:59.157 "write_zeroes": true, 00:14:59.157 "zcopy": false, 00:14:59.157 "get_zone_info": false, 00:14:59.157 "zone_management": false, 00:14:59.157 "zone_append": false, 00:14:59.157 "compare": true, 00:14:59.157 "compare_and_write": false, 00:14:59.157 "abort": true, 00:14:59.157 
"seek_hole": false, 00:14:59.157 "seek_data": false, 00:14:59.157 "copy": true, 00:14:59.157 "nvme_iov_md": false 00:14:59.157 }, 00:14:59.157 "driver_specific": { 00:14:59.157 "nvme": [ 00:14:59.157 { 00:14:59.157 "pci_address": "0000:00:11.0", 00:14:59.157 "trid": { 00:14:59.157 "trtype": "PCIe", 00:14:59.157 "traddr": "0000:00:11.0" 00:14:59.157 }, 00:14:59.157 "ctrlr_data": { 00:14:59.157 "cntlid": 0, 00:14:59.157 "vendor_id": "0x1b36", 00:14:59.157 "model_number": "QEMU NVMe Ctrl", 00:14:59.157 "serial_number": "12341", 00:14:59.157 "firmware_revision": "8.0.0", 00:14:59.157 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:59.157 "oacs": { 00:14:59.157 "security": 0, 00:14:59.157 "format": 1, 00:14:59.157 "firmware": 0, 00:14:59.157 "ns_manage": 1 00:14:59.157 }, 00:14:59.157 "multi_ctrlr": false, 00:14:59.157 "ana_reporting": false 00:14:59.157 }, 00:14:59.157 "vs": { 00:14:59.157 "nvme_version": "1.4" 00:14:59.157 }, 00:14:59.157 "ns_data": { 00:14:59.157 "id": 1, 00:14:59.157 "can_share": false 00:14:59.157 } 00:14:59.157 } 00:14:59.157 ], 00:14:59.157 "mp_policy": "active_passive" 00:14:59.157 } 00:14:59.157 } 00:14:59.157 ]' 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:59.157 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:59.416 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:59.416 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:59.674 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=85201ff3-43f3-4998-98ad-5b07ff9634e8 00:14:59.674 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 85201ff3-43f3-4998-98ad-5b07ff9634e8 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:14:59.932 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 
00:14:59.932 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:14:59.933 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:14:59.933 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:14:59.933 04:35:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:14:59.933 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:14:59.933 { 00:14:59.933 "name": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:14:59.933 "aliases": [ 00:14:59.933 "lvs/nvme0n1p0" 00:14:59.933 ], 00:14:59.933 "product_name": "Logical Volume", 00:14:59.933 "block_size": 4096, 00:14:59.933 "num_blocks": 26476544, 00:14:59.933 "uuid": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:14:59.933 "assigned_rate_limits": { 00:14:59.933 "rw_ios_per_sec": 0, 00:14:59.933 "rw_mbytes_per_sec": 0, 00:14:59.933 "r_mbytes_per_sec": 0, 00:14:59.933 "w_mbytes_per_sec": 0 00:14:59.933 }, 00:14:59.933 "claimed": false, 00:14:59.933 "zoned": false, 00:14:59.933 "supported_io_types": { 00:14:59.933 "read": true, 00:14:59.933 "write": true, 00:14:59.933 "unmap": true, 00:14:59.933 "flush": false, 00:14:59.933 "reset": true, 00:14:59.933 "nvme_admin": false, 00:14:59.933 "nvme_io": false, 00:14:59.933 "nvme_io_md": false, 00:14:59.933 "write_zeroes": true, 00:14:59.933 "zcopy": false, 00:14:59.933 "get_zone_info": false, 00:14:59.933 "zone_management": false, 00:14:59.933 "zone_append": false, 00:14:59.933 "compare": false, 00:14:59.933 "compare_and_write": false, 00:14:59.933 "abort": false, 00:14:59.933 "seek_hole": true, 00:14:59.933 "seek_data": true, 00:14:59.933 "copy": false, 00:14:59.933 "nvme_iov_md": false 00:14:59.933 }, 00:14:59.933 "driver_specific": { 00:14:59.933 "lvol": { 00:14:59.933 "lvol_store_uuid": "85201ff3-43f3-4998-98ad-5b07ff9634e8", 00:14:59.933 "base_bdev": "nvme0n1", 00:14:59.933 "thin_provision": true, 00:14:59.933 "num_allocated_clusters": 0, 00:14:59.933 "snapshot": false, 00:14:59.933 "clone": false, 00:14:59.933 "esnap_clone": false 00:14:59.933 } 00:14:59.933 } 00:14:59.933 } 00:14:59.933 ]' 00:14:59.933 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:00.191 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.449 04:35:23 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:00.449 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:00.708 { 00:15:00.708 "name": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:15:00.708 "aliases": [ 00:15:00.708 "lvs/nvme0n1p0" 00:15:00.708 ], 00:15:00.708 "product_name": "Logical Volume", 00:15:00.708 "block_size": 4096, 00:15:00.708 "num_blocks": 26476544, 00:15:00.708 "uuid": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:15:00.708 "assigned_rate_limits": { 00:15:00.708 "rw_ios_per_sec": 0, 00:15:00.708 "rw_mbytes_per_sec": 0, 00:15:00.708 "r_mbytes_per_sec": 0, 00:15:00.708 "w_mbytes_per_sec": 0 00:15:00.708 }, 00:15:00.708 "claimed": false, 00:15:00.708 "zoned": false, 00:15:00.708 "supported_io_types": { 00:15:00.708 "read": true, 00:15:00.708 "write": true, 00:15:00.708 "unmap": true, 00:15:00.708 "flush": false, 00:15:00.708 "reset": true, 00:15:00.708 "nvme_admin": false, 00:15:00.708 "nvme_io": false, 00:15:00.708 "nvme_io_md": false, 00:15:00.708 "write_zeroes": true, 00:15:00.708 "zcopy": false, 00:15:00.708 "get_zone_info": false, 00:15:00.708 "zone_management": false, 00:15:00.708 "zone_append": false, 00:15:00.708 "compare": false, 00:15:00.708 "compare_and_write": false, 00:15:00.708 "abort": false, 00:15:00.708 "seek_hole": true, 00:15:00.708 "seek_data": true, 00:15:00.708 "copy": false, 00:15:00.708 "nvme_iov_md": false 00:15:00.708 }, 00:15:00.708 "driver_specific": { 00:15:00.708 "lvol": { 00:15:00.708 "lvol_store_uuid": "85201ff3-43f3-4998-98ad-5b07ff9634e8", 00:15:00.708 "base_bdev": "nvme0n1", 00:15:00.708 "thin_provision": true, 00:15:00.708 "num_allocated_clusters": 0, 00:15:00.708 "snapshot": false, 00:15:00.708 "clone": false, 00:15:00.708 "esnap_clone": false 00:15:00.708 } 00:15:00.708 } 00:15:00.708 } 00:15:00.708 ]' 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:00.708 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:00.708 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b 00:15:00.966 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:00.966 { 00:15:00.966 "name": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:15:00.966 "aliases": [ 00:15:00.966 "lvs/nvme0n1p0" 00:15:00.966 ], 00:15:00.966 "product_name": "Logical Volume", 00:15:00.966 "block_size": 4096, 00:15:00.966 "num_blocks": 26476544, 00:15:00.966 "uuid": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:15:00.966 "assigned_rate_limits": { 00:15:00.966 "rw_ios_per_sec": 0, 00:15:00.966 "rw_mbytes_per_sec": 0, 00:15:00.966 "r_mbytes_per_sec": 0, 00:15:00.966 "w_mbytes_per_sec": 0 00:15:00.966 }, 00:15:00.966 "claimed": false, 00:15:00.966 "zoned": false, 00:15:00.966 "supported_io_types": { 00:15:00.966 "read": true, 00:15:00.966 "write": true, 00:15:00.966 "unmap": true, 00:15:00.966 "flush": false, 00:15:00.966 "reset": true, 00:15:00.966 "nvme_admin": false, 00:15:00.966 "nvme_io": false, 00:15:00.966 "nvme_io_md": false, 00:15:00.966 "write_zeroes": true, 00:15:00.966 "zcopy": false, 00:15:00.966 "get_zone_info": false, 00:15:00.966 "zone_management": false, 00:15:00.966 "zone_append": false, 00:15:00.966 "compare": false, 00:15:00.966 "compare_and_write": false, 00:15:00.966 "abort": false, 00:15:00.966 "seek_hole": true, 00:15:00.966 "seek_data": true, 00:15:00.966 "copy": false, 00:15:00.966 "nvme_iov_md": false 00:15:00.966 }, 00:15:00.966 "driver_specific": { 00:15:00.966 "lvol": { 00:15:00.966 "lvol_store_uuid": "85201ff3-43f3-4998-98ad-5b07ff9634e8", 00:15:00.966 "base_bdev": "nvme0n1", 00:15:00.966 "thin_provision": true, 00:15:00.967 "num_allocated_clusters": 0, 00:15:00.967 "snapshot": false, 00:15:00.967 "clone": false, 00:15:00.967 "esnap_clone": false 00:15:00.967 } 00:15:00.967 } 00:15:00.967 } 00:15:00.967 ]' 00:15:00.967 04:35:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:00.967 04:35:24 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ef5a551-8fe7-4c11-9ff6-69bec3931f1b -c nvc0n1p0 --l2p_dram_limit 60 00:15:01.226 [2024-11-03 04:35:24.224418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.224452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:01.226 [2024-11-03 04:35:24.224464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:01.226 
[2024-11-03 04:35:24.224471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.224521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.224529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:01.226 [2024-11-03 04:35:24.224536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:01.226 [2024-11-03 04:35:24.224544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.224592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:01.226 [2024-11-03 04:35:24.225198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:01.226 [2024-11-03 04:35:24.225217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.225223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:01.226 [2024-11-03 04:35:24.225231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:15:01.226 [2024-11-03 04:35:24.225238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.225276] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4d065b61-4e66-40ee-8893-3861651af5f8 00:15:01.226 [2024-11-03 04:35:24.226302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.226321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:01.226 [2024-11-03 04:35:24.226330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:01.226 [2024-11-03 04:35:24.226338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.231499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.231525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:01.226 [2024-11-03 04:35:24.231533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.074 ms 00:15:01.226 [2024-11-03 04:35:24.231541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.231634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.231645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:01.226 [2024-11-03 04:35:24.231651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:01.226 [2024-11-03 04:35:24.231660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.231707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.231716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:01.226 [2024-11-03 04:35:24.231722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:01.226 [2024-11-03 04:35:24.231729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.231757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:01.226 [2024-11-03 04:35:24.234668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 
04:35:24.234690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:01.226 [2024-11-03 04:35:24.234699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:15:01.226 [2024-11-03 04:35:24.234705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.226 [2024-11-03 04:35:24.234736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.226 [2024-11-03 04:35:24.234744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:01.226 [2024-11-03 04:35:24.234752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:01.226 [2024-11-03 04:35:24.234757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.227 [2024-11-03 04:35:24.234787] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:01.227 [2024-11-03 04:35:24.234899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:01.227 [2024-11-03 04:35:24.234912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:01.227 [2024-11-03 04:35:24.234920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:01.227 [2024-11-03 04:35:24.234929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:01.227 [2024-11-03 04:35:24.234935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:01.227 [2024-11-03 04:35:24.234943] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:01.227 [2024-11-03 04:35:24.234948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:01.227 [2024-11-03 04:35:24.234956] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:01.227 [2024-11-03 04:35:24.234961] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:01.227 [2024-11-03 04:35:24.234968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.227 [2024-11-03 04:35:24.234974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:01.227 [2024-11-03 04:35:24.234983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:15:01.227 [2024-11-03 04:35:24.234989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.227 [2024-11-03 04:35:24.235060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.227 [2024-11-03 04:35:24.235067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:01.227 [2024-11-03 04:35:24.235074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:01.227 [2024-11-03 04:35:24.235079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.227 [2024-11-03 04:35:24.235178] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:01.227 [2024-11-03 04:35:24.235188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:01.227 [2024-11-03 04:35:24.235195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235209] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:01.227 [2024-11-03 04:35:24.235219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:01.227 [2024-11-03 04:35:24.235237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.227 [2024-11-03 04:35:24.235249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:01.227 [2024-11-03 04:35:24.235254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:01.227 [2024-11-03 04:35:24.235260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:01.227 [2024-11-03 04:35:24.235265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:01.227 [2024-11-03 04:35:24.235272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:01.227 [2024-11-03 04:35:24.235277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:01.227 [2024-11-03 04:35:24.235292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:01.227 [2024-11-03 04:35:24.235310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:01.227 [2024-11-03 04:35:24.235326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:01.227 [2024-11-03 04:35:24.235343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:01.227 [2024-11-03 04:35:24.235359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:01.227 [2024-11-03 04:35:24.235378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.227 [2024-11-03 04:35:24.235389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:01.227 [2024-11-03 04:35:24.235405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:01.227 [2024-11-03 04:35:24.235411] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:01.227 [2024-11-03 04:35:24.235417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:01.227 [2024-11-03 04:35:24.235423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:01.227 [2024-11-03 04:35:24.235428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:01.227 [2024-11-03 04:35:24.235440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:01.227 [2024-11-03 04:35:24.235447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235452] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:01.227 [2024-11-03 04:35:24.235459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:01.227 [2024-11-03 04:35:24.235464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:01.227 [2024-11-03 04:35:24.235478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:01.227 [2024-11-03 04:35:24.235486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:01.227 [2024-11-03 04:35:24.235491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:01.227 [2024-11-03 04:35:24.235498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:01.227 [2024-11-03 04:35:24.235503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:01.227 [2024-11-03 04:35:24.235509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:01.227 [2024-11-03 04:35:24.235517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:01.227 [2024-11-03 04:35:24.235525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:01.227 [2024-11-03 04:35:24.235538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:01.227 [2024-11-03 04:35:24.235543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:01.227 [2024-11-03 04:35:24.235550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:01.227 [2024-11-03 04:35:24.235555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:01.227 [2024-11-03 04:35:24.235574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:01.227 [2024-11-03 04:35:24.235580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:01.227 [2024-11-03 04:35:24.235586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:01.227 [2024-11-03 04:35:24.235592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:01.227 [2024-11-03 04:35:24.235600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:01.227 [2024-11-03 04:35:24.235632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:01.227 [2024-11-03 04:35:24.235639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:01.227 [2024-11-03 04:35:24.235652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:01.227 [2024-11-03 04:35:24.235657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:01.227 [2024-11-03 04:35:24.235665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:01.227 [2024-11-03 04:35:24.235671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:01.227 [2024-11-03 04:35:24.235678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:01.227 [2024-11-03 04:35:24.235685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:15:01.227 [2024-11-03 04:35:24.235692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:01.227 [2024-11-03 04:35:24.235770] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
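The layout dump above can be sanity-checked against the bdev_ftl_create arguments: ftl0 exposes 20971520 logical blocks (see the L2P entries line above and the bdev_get_bdevs output for ftl0 further down), and with the reported 4-byte L2P address size the full logical-to-physical table needs 20971520 * 4 bytes = 80 MiB, matching the 80.00 MiB l2p region. Because the create call passed --l2p_dram_limit 60, only part of that table can stay resident, which is why startup later reports "l2p maximum resident size is: 59 (of 60) MiB". The check below is a standalone illustration of that arithmetic, not code from the test; the variable names are made up for clarity and the values are taken from the dump.

  # L2P sizing sanity check for the values printed in the layout dump (sketch).
  entries=20971520   # logical blocks exposed by ftl0 ("L2P entries")
  addr_size=4        # bytes per entry ("L2P address size: 4")
  echo "l2p table: $(( entries * addr_size / 1024 / 1024 )) MiB"          # -> 80 MiB
  echo "user capacity: $(( entries * 4096 / 1024 / 1024 / 1024 )) GiB"    # -> 80 GiB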
00:15:01.228 [2024-11-03 04:35:24.235780] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:03.757 [2024-11-03 04:35:26.571782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.571834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:03.757 [2024-11-03 04:35:26.571849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2336.003 ms 00:15:03.757 [2024-11-03 04:35:26.571862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.597604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.597643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:03.757 [2024-11-03 04:35:26.597654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.529 ms 00:15:03.757 [2024-11-03 04:35:26.597663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.597783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.597800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:03.757 [2024-11-03 04:35:26.597809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:03.757 [2024-11-03 04:35:26.597819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.642274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.642337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:03.757 [2024-11-03 04:35:26.642359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.404 ms 00:15:03.757 [2024-11-03 04:35:26.642384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.642450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.642469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:03.757 [2024-11-03 04:35:26.642486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:03.757 [2024-11-03 04:35:26.642502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.643026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.643069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:03.757 [2024-11-03 04:35:26.643087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:15:03.757 [2024-11-03 04:35:26.643104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.643326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.643345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:03.757 [2024-11-03 04:35:26.643360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:15:03.757 [2024-11-03 04:35:26.643379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.660329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.660359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:03.757 [2024-11-03 
04:35:26.660369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:15:03.757 [2024-11-03 04:35:26.660378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.671871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:03.757 [2024-11-03 04:35:26.686436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.686476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:03.757 [2024-11-03 04:35:26.686488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.967 ms 00:15:03.757 [2024-11-03 04:35:26.686496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.737313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.737347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:03.757 [2024-11-03 04:35:26.737360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.777 ms 00:15:03.757 [2024-11-03 04:35:26.737369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.737548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.737569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:03.757 [2024-11-03 04:35:26.737581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:03.757 [2024-11-03 04:35:26.737589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.760618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.760646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:03.757 [2024-11-03 04:35:26.760659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.973 ms 00:15:03.757 [2024-11-03 04:35:26.760669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.782954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.757 [2024-11-03 04:35:26.782980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:03.757 [2024-11-03 04:35:26.782992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.242 ms 00:15:03.757 [2024-11-03 04:35:26.782999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:03.757 [2024-11-03 04:35:26.783575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:03.758 [2024-11-03 04:35:26.783594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:03.758 [2024-11-03 04:35:26.783604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:15:03.758 [2024-11-03 04:35:26.783612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.017 [2024-11-03 04:35:26.847635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.017 [2024-11-03 04:35:26.847667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:04.017 [2024-11-03 04:35:26.847683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.976 ms 00:15:04.017 [2024-11-03 04:35:26.847691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.017 [2024-11-03 
04:35:26.872114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.017 [2024-11-03 04:35:26.872154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:04.017 [2024-11-03 04:35:26.872166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.333 ms 00:15:04.017 [2024-11-03 04:35:26.872174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.017 [2024-11-03 04:35:26.894744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.017 [2024-11-03 04:35:26.894771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:04.018 [2024-11-03 04:35:26.894782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.524 ms 00:15:04.018 [2024-11-03 04:35:26.894789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.018 [2024-11-03 04:35:26.917737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.018 [2024-11-03 04:35:26.917764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:04.018 [2024-11-03 04:35:26.917775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:15:04.018 [2024-11-03 04:35:26.917783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.018 [2024-11-03 04:35:26.917831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.018 [2024-11-03 04:35:26.917839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:04.018 [2024-11-03 04:35:26.917851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:04.018 [2024-11-03 04:35:26.917858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.018 [2024-11-03 04:35:26.917939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.018 [2024-11-03 04:35:26.917949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:04.018 [2024-11-03 04:35:26.917958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:04.018 [2024-11-03 04:35:26.917965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.018 [2024-11-03 04:35:26.918965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2694.119 ms, result 0 00:15:04.018 { 00:15:04.018 "name": "ftl0", 00:15:04.018 "uuid": "4d065b61-4e66-40ee-8893-3861651af5f8" 00:15:04.018 } 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:04.018 04:35:26 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.275 04:35:27 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:04.275 [ 00:15:04.275 { 00:15:04.275 "name": "ftl0", 00:15:04.275 "aliases": [ 00:15:04.275 "4d065b61-4e66-40ee-8893-3861651af5f8" 00:15:04.275 ], 00:15:04.275 "product_name": "FTL 
disk", 00:15:04.275 "block_size": 4096, 00:15:04.275 "num_blocks": 20971520, 00:15:04.275 "uuid": "4d065b61-4e66-40ee-8893-3861651af5f8", 00:15:04.276 "assigned_rate_limits": { 00:15:04.276 "rw_ios_per_sec": 0, 00:15:04.276 "rw_mbytes_per_sec": 0, 00:15:04.276 "r_mbytes_per_sec": 0, 00:15:04.276 "w_mbytes_per_sec": 0 00:15:04.276 }, 00:15:04.276 "claimed": false, 00:15:04.276 "zoned": false, 00:15:04.276 "supported_io_types": { 00:15:04.276 "read": true, 00:15:04.276 "write": true, 00:15:04.276 "unmap": true, 00:15:04.276 "flush": true, 00:15:04.276 "reset": false, 00:15:04.276 "nvme_admin": false, 00:15:04.276 "nvme_io": false, 00:15:04.276 "nvme_io_md": false, 00:15:04.276 "write_zeroes": true, 00:15:04.276 "zcopy": false, 00:15:04.276 "get_zone_info": false, 00:15:04.276 "zone_management": false, 00:15:04.276 "zone_append": false, 00:15:04.276 "compare": false, 00:15:04.276 "compare_and_write": false, 00:15:04.276 "abort": false, 00:15:04.276 "seek_hole": false, 00:15:04.276 "seek_data": false, 00:15:04.276 "copy": false, 00:15:04.276 "nvme_iov_md": false 00:15:04.276 }, 00:15:04.276 "driver_specific": { 00:15:04.276 "ftl": { 00:15:04.276 "base_bdev": "4ef5a551-8fe7-4c11-9ff6-69bec3931f1b", 00:15:04.276 "cache": "nvc0n1p0" 00:15:04.276 } 00:15:04.276 } 00:15:04.276 } 00:15:04.276 ] 00:15:04.276 04:35:27 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:04.276 04:35:27 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:04.276 04:35:27 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:04.534 04:35:27 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:04.534 04:35:27 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:04.793 [2024-11-03 04:35:27.712141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.712178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:04.793 [2024-11-03 04:35:27.712189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:04.793 [2024-11-03 04:35:27.712199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.712229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:04.793 [2024-11-03 04:35:27.714849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.714878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:04.793 [2024-11-03 04:35:27.714890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:15:04.793 [2024-11-03 04:35:27.714898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.715366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.715380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:04.793 [2024-11-03 04:35:27.715390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:15:04.793 [2024-11-03 04:35:27.715397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.718649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.718668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:04.793 
[2024-11-03 04:35:27.718682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:15:04.793 [2024-11-03 04:35:27.718690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.724866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.724888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:04.793 [2024-11-03 04:35:27.724901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:15:04.793 [2024-11-03 04:35:27.724909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.748326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.748354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:04.793 [2024-11-03 04:35:27.748367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.330 ms 00:15:04.793 [2024-11-03 04:35:27.748375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.763150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.763179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:04.793 [2024-11-03 04:35:27.763193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.714 ms 00:15:04.793 [2024-11-03 04:35:27.763200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.763389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.763399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:04.793 [2024-11-03 04:35:27.763409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:15:04.793 [2024-11-03 04:35:27.763417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.782297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.782320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:04.793 [2024-11-03 04:35:27.782329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.851 ms 00:15:04.793 [2024-11-03 04:35:27.782335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.800072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.800095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:04.793 [2024-11-03 04:35:27.800105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.694 ms 00:15:04.793 [2024-11-03 04:35:27.800110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.793 [2024-11-03 04:35:27.817446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.793 [2024-11-03 04:35:27.817469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:04.794 [2024-11-03 04:35:27.817478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.288 ms 00:15:04.794 [2024-11-03 04:35:27.817483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.794 [2024-11-03 04:35:27.834761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.794 [2024-11-03 04:35:27.834784] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:04.794 [2024-11-03 04:35:27.834793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:15:04.794 [2024-11-03 04:35:27.834799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.794 [2024-11-03 04:35:27.834834] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:04.794 [2024-11-03 04:35:27.834845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.834986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 
[2024-11-03 04:35:27.834992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:04.794 [2024-11-03 04:35:27.835160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:04.794 [2024-11-03 04:35:27.835403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:04.795 [2024-11-03 04:35:27.835521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:04.795 [2024-11-03 04:35:27.835529] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d065b61-4e66-40ee-8893-3861651af5f8 00:15:04.795 [2024-11-03 04:35:27.835535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:04.795 [2024-11-03 04:35:27.835543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:04.795 [2024-11-03 04:35:27.835549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:04.795 [2024-11-03 04:35:27.835556] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:04.795 [2024-11-03 04:35:27.835571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:04.795 [2024-11-03 04:35:27.835580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:04.795 [2024-11-03 04:35:27.835585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:04.795 [2024-11-03 04:35:27.835592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:04.795 [2024-11-03 04:35:27.835597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:04.795 [2024-11-03 04:35:27.835603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.795 [2024-11-03 04:35:27.835609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:04.795 [2024-11-03 04:35:27.835617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:15:04.795 [2024-11-03 04:35:27.835622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.795 [2024-11-03 04:35:27.845452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.795 [2024-11-03 04:35:27.845473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:04.795 [2024-11-03 04:35:27.845482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.795 ms 00:15:04.795 [2024-11-03 04:35:27.845490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.795 [2024-11-03 04:35:27.845787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.795 [2024-11-03 04:35:27.845794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:04.795 [2024-11-03 04:35:27.845802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:15:04.795 [2024-11-03 04:35:27.845808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.880954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.880978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:05.054 [2024-11-03 04:35:27.880990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.880996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
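The statistics dump above is the quickest health check in this shutdown trace: the FTL reports 960 total device writes against 0 user writes, and the WAF figure is just total writes divided by user writes, so with no user I/O it is printed as "inf". A minimal sketch for pulling those counters back out of a saved console log; the log file name is a placeholder assumption, while the field names are copied from the ftl_dev_dump_stats lines above:

# Recompute WAF from the dumped counters (sketch only; console.log is a placeholder).
grep -oE '(total|user) writes: +[0-9]+' console.log \
  | awk -F': *' '{ v[$1] = $2 }
      END {
        if (v["user writes"] > 0)
          printf "WAF=%.2f\n", v["total writes"] / v["user writes"]
        else
          print "WAF=inf"
      }'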
00:15:05.054 [2024-11-03 04:35:27.881050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.881057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:05.054 [2024-11-03 04:35:27.881065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.881071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.881149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.881156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:05.054 [2024-11-03 04:35:27.881164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.881172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.881197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.881203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:05.054 [2024-11-03 04:35:27.881210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.881216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.945386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.945417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:05.054 [2024-11-03 04:35:27.945428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.945437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.994823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.994854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:05.054 [2024-11-03 04:35:27.994865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.994871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.994934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.994942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:05.054 [2024-11-03 04:35:27.994950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.994956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.995044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:05.054 [2024-11-03 04:35:27.995052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.995058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.995159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:05.054 [2024-11-03 04:35:27.995166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 
04:35:27.995172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.995227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:05.054 [2024-11-03 04:35:27.995236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.995242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.995290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:05.054 [2024-11-03 04:35:27.995297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.995303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.054 [2024-11-03 04:35:27.995360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:05.054 [2024-11-03 04:35:27.995368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.054 [2024-11-03 04:35:27.995374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.054 [2024-11-03 04:35:27.995516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.354 ms, result 0 00:15:05.054 true 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72362 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72362 ']' 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72362 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72362 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:05.054 killing process with pid 72362 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72362' 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72362 00:15:05.054 04:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72362 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:11.623 04:35:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:11.623 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:11.623 fio-3.35 00:15:11.623 Starting 1 thread 00:15:16.902 00:15:16.902 test: (groupid=0, jobs=1): err= 0: pid=72542: Sun Nov 3 04:35:39 2024 00:15:16.902 read: IOPS=883, BW=58.7MiB/s (61.5MB/s)(255MiB/4337msec) 00:15:16.902 slat (nsec): min=3926, max=44879, avg=6643.80, stdev=3092.54 00:15:16.902 clat (usec): min=258, max=3423, avg=513.94, stdev=201.34 00:15:16.902 lat (usec): min=264, max=3428, avg=520.58, stdev=202.26 00:15:16.902 clat percentiles (usec): 00:15:16.902 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:15:16.902 | 30.00th=[ 379], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 529], 00:15:16.902 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 832], 95.00th=[ 914], 00:15:16.902 | 99.00th=[ 1012], 99.50th=[ 1090], 99.90th=[ 1532], 99.95th=[ 2966], 00:15:16.902 | 99.99th=[ 3425] 00:15:16.902 write: IOPS=889, BW=59.1MiB/s (62.0MB/s)(256MiB/4334msec); 0 zone resets 00:15:16.902 slat (nsec): min=14407, max=97784, avg=24792.74, stdev=6973.59 00:15:16.902 clat (usec): min=279, max=1621, avg=568.94, stdev=201.31 00:15:16.902 lat (usec): min=308, max=1639, avg=593.73, stdev=201.75 00:15:16.902 clat percentiles (usec): 00:15:16.902 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 334], 00:15:16.902 | 30.00th=[ 453], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 578], 00:15:16.902 | 70.00th=[ 635], 80.00th=[ 701], 90.00th=[ 914], 95.00th=[ 988], 00:15:16.902 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1287], 99.95th=[ 1336], 00:15:16.902 | 99.99th=[ 1614] 00:15:16.902 bw ( KiB/s): min=44472, max=91936, per=94.92%, avg=57426.00, stdev=15163.46, samples=8 00:15:16.902 iops : min= 654, max= 1352, avg=844.50, stdev=222.99, samples=8 00:15:16.902 lat (usec) : 500=45.38%, 750=39.95%, 1000=11.90% 
00:15:16.902 lat (msec) : 2=2.74%, 4=0.03% 00:15:16.902 cpu : usr=99.12%, sys=0.09%, ctx=5, majf=0, minf=1169 00:15:16.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.902 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.902 00:15:16.902 Run status group 0 (all jobs): 00:15:16.902 READ: bw=58.7MiB/s (61.5MB/s), 58.7MiB/s-58.7MiB/s (61.5MB/s-61.5MB/s), io=255MiB (267MB), run=4337-4337msec 00:15:16.902 WRITE: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=256MiB (269MB), run=4334-4334msec 00:15:17.473 ----------------------------------------------------- 00:15:17.473 Suppressions used: 00:15:17.473 count bytes template 00:15:17.473 1 5 /usr/src/fio/parse.c 00:15:17.473 1 8 libtcmalloc_minimal.so 00:15:17.473 1 904 libcrypto.so 00:15:17.473 ----------------------------------------------------- 00:15:17.473 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:17.473 04:35:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:17.734 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:17.734 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:17.734 fio-3.35 00:15:17.734 Starting 2 threads 00:15:44.295 00:15:44.295 first_half: (groupid=0, jobs=1): err= 0: pid=72645: Sun Nov 3 04:36:05 2024 00:15:44.295 read: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(256MiB/23350msec) 00:15:44.295 slat (nsec): min=2905, max=37070, avg=5456.92, stdev=1555.29 00:15:44.295 clat (usec): min=717, max=421047, avg=38396.71, stdev=28832.41 00:15:44.295 lat (usec): min=721, max=421057, avg=38402.16, stdev=28832.59 00:15:44.295 clat percentiles (msec): 00:15:44.295 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:15:44.295 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:15:44.295 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 72], 00:15:44.295 | 99.00th=[ 165], 99.50th=[ 230], 99.90th=[ 347], 99.95th=[ 401], 00:15:44.295 | 99.99th=[ 418] 00:15:44.295 write: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(256MiB/23316msec); 0 zone resets 00:15:44.295 slat (usec): min=3, max=1397, avg= 6.57, stdev= 6.32 00:15:44.295 clat (usec): min=398, max=54352, avg=7213.22, stdev=7699.35 00:15:44.295 lat (usec): min=406, max=54358, avg=7219.79, stdev=7699.61 00:15:44.295 clat percentiles (usec): 00:15:44.295 | 1.00th=[ 750], 5.00th=[ 922], 10.00th=[ 1385], 20.00th=[ 2737], 00:15:44.295 | 30.00th=[ 3523], 40.00th=[ 4228], 50.00th=[ 5014], 60.00th=[ 5604], 00:15:44.295 | 70.00th=[ 6325], 80.00th=[ 9896], 90.00th=[14877], 95.00th=[22676], 00:15:44.295 | 99.00th=[45876], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:15:44.295 | 99.99th=[54264] 00:15:44.295 bw ( KiB/s): min= 1888, max=53912, per=100.00%, avg=23659.27, stdev=13523.85, samples=22 00:15:44.295 iops : min= 472, max=13478, avg=5914.77, stdev=3380.96, samples=22 00:15:44.295 lat (usec) : 500=0.01%, 750=0.47%, 1000=2.60% 00:15:44.295 lat (msec) : 2=3.76%, 4=11.63%, 10=22.03%, 20=7.69%, 50=48.01% 00:15:44.295 lat (msec) : 100=1.94%, 250=1.66%, 500=0.19% 00:15:44.295 cpu : usr=99.31%, sys=0.13%, ctx=55, majf=0, minf=5530 00:15:44.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:44.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.295 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.295 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.295 second_half: (groupid=0, jobs=1): err= 0: pid=72646: Sun Nov 3 04:36:05 2024 00:15:44.295 read: IOPS=2834, BW=11.1MiB/s (11.6MB/s)(256MiB/23104msec) 00:15:44.295 slat (nsec): min=3075, max=58216, avg=4508.96, stdev=1501.55 00:15:44.295 clat (msec): min=11, max=397, avg=38.43, stdev=23.63 00:15:44.295 lat (msec): min=11, max=397, avg=38.43, stdev=23.63 00:15:44.295 clat percentiles (msec): 00:15:44.295 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:15:44.295 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:15:44.295 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 47], 95.00th=[ 70], 
00:15:44.295 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 271], 99.95th=[ 309], 00:15:44.295 | 99.99th=[ 380] 00:15:44.295 write: IOPS=3034, BW=11.9MiB/s (12.4MB/s)(256MiB/21597msec); 0 zone resets 00:15:44.295 slat (usec): min=3, max=1197, avg= 5.81, stdev= 5.96 00:15:44.295 clat (usec): min=383, max=40204, avg=6710.60, stdev=4893.94 00:15:44.295 lat (usec): min=390, max=40209, avg=6716.41, stdev=4894.58 00:15:44.295 clat percentiles (usec): 00:15:44.295 | 1.00th=[ 881], 5.00th=[ 1663], 10.00th=[ 2474], 20.00th=[ 3228], 00:15:44.295 | 30.00th=[ 3949], 40.00th=[ 4621], 50.00th=[ 5145], 60.00th=[ 5538], 00:15:44.295 | 70.00th=[ 6390], 80.00th=[10814], 90.00th=[13829], 95.00th=[16188], 00:15:44.295 | 99.00th=[23462], 99.50th=[25822], 99.90th=[30540], 99.95th=[32113], 00:15:44.295 | 99.99th=[39060] 00:15:44.295 bw ( KiB/s): min= 744, max=47616, per=100.00%, avg=24962.48, stdev=14152.23, samples=21 00:15:44.295 iops : min= 186, max=11904, avg=6240.62, stdev=3538.06, samples=21 00:15:44.295 lat (usec) : 500=0.02%, 750=0.18%, 1000=0.57% 00:15:44.295 lat (msec) : 2=2.39%, 4=12.26%, 10=23.47%, 20=9.90%, 50=46.94% 00:15:44.295 lat (msec) : 100=2.52%, 250=1.68%, 500=0.08% 00:15:44.295 cpu : usr=99.21%, sys=0.14%, ctx=85, majf=0, minf=5585 00:15:44.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:44.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.295 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.295 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.295 00:15:44.295 Run status group 0 (all jobs): 00:15:44.295 READ: bw=21.9MiB/s (23.0MB/s), 11.0MiB/s-11.1MiB/s (11.5MB/s-11.6MB/s), io=512MiB (536MB), run=23104-23350msec 00:15:44.295 WRITE: bw=22.0MiB/s (23.0MB/s), 11.0MiB/s-11.9MiB/s (11.5MB/s-12.4MB/s), io=512MiB (537MB), run=21597-23316msec 00:15:44.867 ----------------------------------------------------- 00:15:44.867 Suppressions used: 00:15:44.867 count bytes template 00:15:44.867 2 10 /usr/src/fio/parse.c 00:15:44.867 3 288 /usr/src/fio/iolog.c 00:15:44.867 1 8 libtcmalloc_minimal.so 00:15:44.867 1 904 libcrypto.so 00:15:44.867 ----------------------------------------------------- 00:15:44.867 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:44.867 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.868 
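Both fio runs above, and the randw-verify-depth128 run being set up in the trace that continues below, reach fio through the same preload pattern: the harness runs ldd on the SPDK fio plugin, picks out the sanitizer runtime it was linked against (libasan.so.8 here), and preloads that runtime together with the plugin so fio can load ioengine=spdk_bdev under ASAN. A condensed, standalone sketch of that pattern, with paths copied from the trace and the job file name left as a placeholder:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Resolve the ASAN runtime the plugin was linked against (empty if none).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload the sanitizer runtime first, then the plugin, so the spdk_bdev
# ioengine resolves cleanly when fio starts.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio my_job.fio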
04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.868 04:36:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:45.128 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:45.128 fio-3.35 00:15:45.128 Starting 1 thread 00:16:00.023 00:16:00.023 test: (groupid=0, jobs=1): err= 0: pid=72964: Sun Nov 3 04:36:23 2024 00:16:00.023 read: IOPS=6733, BW=26.3MiB/s (27.6MB/s)(255MiB/9683msec) 00:16:00.023 slat (nsec): min=2935, max=29444, avg=4529.55, stdev=998.59 00:16:00.023 clat (usec): min=742, max=39733, avg=19002.00, stdev=2817.76 00:16:00.023 lat (usec): min=750, max=39737, avg=19006.53, stdev=2817.75 00:16:00.023 clat percentiles (usec): 00:16:00.023 | 1.00th=[15008], 5.00th=[15401], 10.00th=[15664], 20.00th=[16188], 00:16:00.023 | 30.00th=[17171], 40.00th=[18220], 50.00th=[19006], 60.00th=[19530], 00:16:00.023 | 70.00th=[20317], 80.00th=[21103], 90.00th=[22676], 95.00th=[23987], 00:16:00.023 | 99.00th=[26870], 99.50th=[27919], 99.90th=[31589], 99.95th=[35390], 00:16:00.023 | 99.99th=[38536] 00:16:00.023 write: IOPS=15.4k, BW=60.1MiB/s (63.0MB/s)(256MiB/4261msec); 0 zone resets 00:16:00.023 slat (usec): min=3, max=493, avg= 6.48, stdev= 3.14 00:16:00.023 clat (usec): min=459, max=85197, avg=8278.14, stdev=11345.33 00:16:00.023 lat (usec): min=464, max=85204, avg=8284.61, stdev=11345.41 00:16:00.023 clat percentiles (usec): 00:16:00.023 | 1.00th=[ 611], 5.00th=[ 709], 10.00th=[ 799], 20.00th=[ 1020], 00:16:00.023 | 30.00th=[ 1237], 40.00th=[ 2114], 50.00th=[ 4817], 60.00th=[ 5538], 00:16:00.023 | 70.00th=[ 6456], 80.00th=[ 8356], 90.00th=[28967], 95.00th=[34341], 00:16:00.023 | 99.00th=[46400], 99.50th=[53740], 99.90th=[58459], 99.95th=[70779], 00:16:00.023 | 99.99th=[80217] 00:16:00.023 bw ( KiB/s): min=16312, max=89736, per=94.69%, avg=58254.22, stdev=22263.54, samples=9 00:16:00.023 iops : min= 4078, max=22434, avg=14563.56, stdev=5565.88, samples=9 00:16:00.023 lat (usec) : 500=0.01%, 750=3.75%, 1000=5.75% 00:16:00.023 lat (msec) : 2=10.32%, 4=1.74%, 10=20.12%, 20=33.68%, 50=24.26% 00:16:00.023 lat (msec) : 100=0.39% 00:16:00.023 cpu : usr=99.17%, sys=0.14%, ctx=23, 
majf=0, minf=5566 00:16:00.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:00.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.023 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.023 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.023 00:16:00.023 Run status group 0 (all jobs): 00:16:00.023 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=255MiB (267MB), run=9683-9683msec 00:16:00.023 WRITE: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=256MiB (268MB), run=4261-4261msec 00:16:01.997 ----------------------------------------------------- 00:16:01.997 Suppressions used: 00:16:01.997 count bytes template 00:16:01.997 1 5 /usr/src/fio/parse.c 00:16:01.997 2 192 /usr/src/fio/iolog.c 00:16:01.997 1 8 libtcmalloc_minimal.so 00:16:01.997 1 904 libcrypto.so 00:16:01.997 ----------------------------------------------------- 00:16:01.997 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:01.997 Remove shared memory files 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57168 /dev/shm/spdk_tgt_trace.pid71271 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:01.997 00:16:01.997 real 1m4.381s 00:16:01.997 user 2m15.755s 00:16:01.997 sys 0m2.757s 00:16:01.997 ************************************ 00:16:01.997 END TEST ftl_fio_basic 00:16:01.997 ************************************ 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.997 04:36:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 04:36:24 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:01.997 04:36:24 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:01.997 04:36:24 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:01.997 04:36:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 ************************************ 00:16:01.997 START TEST ftl_bdevperf 00:16:01.997 ************************************ 00:16:01.997 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:02.258 * Looking for test storage... 
00:16:02.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.259 --rc genhtml_branch_coverage=1 00:16:02.259 --rc genhtml_function_coverage=1 00:16:02.259 --rc genhtml_legend=1 00:16:02.259 --rc geninfo_all_blocks=1 00:16:02.259 --rc geninfo_unexecuted_blocks=1 00:16:02.259 00:16:02.259 ' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.259 --rc genhtml_branch_coverage=1 00:16:02.259 
--rc genhtml_function_coverage=1 00:16:02.259 --rc genhtml_legend=1 00:16:02.259 --rc geninfo_all_blocks=1 00:16:02.259 --rc geninfo_unexecuted_blocks=1 00:16:02.259 00:16:02.259 ' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.259 --rc genhtml_branch_coverage=1 00:16:02.259 --rc genhtml_function_coverage=1 00:16:02.259 --rc genhtml_legend=1 00:16:02.259 --rc geninfo_all_blocks=1 00:16:02.259 --rc geninfo_unexecuted_blocks=1 00:16:02.259 00:16:02.259 ' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.259 --rc genhtml_branch_coverage=1 00:16:02.259 --rc genhtml_function_coverage=1 00:16:02.259 --rc genhtml_legend=1 00:16:02.259 --rc geninfo_all_blocks=1 00:16:02.259 --rc geninfo_unexecuted_blocks=1 00:16:02.259 00:16:02.259 ' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73204 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73204 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73204 ']' 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:02.259 04:36:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:02.259 [2024-11-03 04:36:25.263457] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
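bdevperf is launched here in RPC-wait mode (-z) and pointed at the ftl0 bdev (-T ftl0, as traced above); its startup banner is the line just above, and the EAL parameter line follows below. The waitforlisten helper then blocks until the application answers on /var/tmp/spdk.sock before any bdevs are created. A rough standalone equivalent of that launch-and-wait step; the polling loop stands in for waitforlisten, and rpc_get_methods is used only as a cheap readiness probe:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# Poll the RPC socket until bdevperf is ready to accept configuration RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done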
00:16:02.259 [2024-11-03 04:36:25.263587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73204 ] 00:16:02.521 [2024-11-03 04:36:25.423492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.521 [2024-11-03 04:36:25.522672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:03.093 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:03.355 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:03.615 { 00:16:03.615 "name": "nvme0n1", 00:16:03.615 "aliases": [ 00:16:03.615 "184f7dfe-52c1-4355-9902-41188fc50134" 00:16:03.615 ], 00:16:03.615 "product_name": "NVMe disk", 00:16:03.615 "block_size": 4096, 00:16:03.615 "num_blocks": 1310720, 00:16:03.615 "uuid": "184f7dfe-52c1-4355-9902-41188fc50134", 00:16:03.615 "numa_id": -1, 00:16:03.615 "assigned_rate_limits": { 00:16:03.615 "rw_ios_per_sec": 0, 00:16:03.615 "rw_mbytes_per_sec": 0, 00:16:03.615 "r_mbytes_per_sec": 0, 00:16:03.615 "w_mbytes_per_sec": 0 00:16:03.615 }, 00:16:03.615 "claimed": true, 00:16:03.615 "claim_type": "read_many_write_one", 00:16:03.615 "zoned": false, 00:16:03.615 "supported_io_types": { 00:16:03.615 "read": true, 00:16:03.615 "write": true, 00:16:03.615 "unmap": true, 00:16:03.615 "flush": true, 00:16:03.615 "reset": true, 00:16:03.615 "nvme_admin": true, 00:16:03.615 "nvme_io": true, 00:16:03.615 "nvme_io_md": false, 00:16:03.615 "write_zeroes": true, 00:16:03.615 "zcopy": false, 00:16:03.615 "get_zone_info": false, 00:16:03.615 "zone_management": false, 00:16:03.615 "zone_append": false, 00:16:03.615 "compare": true, 00:16:03.615 "compare_and_write": false, 00:16:03.615 "abort": true, 00:16:03.615 "seek_hole": false, 00:16:03.615 "seek_data": false, 00:16:03.615 "copy": true, 00:16:03.615 "nvme_iov_md": false 00:16:03.615 }, 00:16:03.615 "driver_specific": { 00:16:03.615 
"nvme": [ 00:16:03.615 { 00:16:03.615 "pci_address": "0000:00:11.0", 00:16:03.615 "trid": { 00:16:03.615 "trtype": "PCIe", 00:16:03.615 "traddr": "0000:00:11.0" 00:16:03.615 }, 00:16:03.615 "ctrlr_data": { 00:16:03.615 "cntlid": 0, 00:16:03.615 "vendor_id": "0x1b36", 00:16:03.615 "model_number": "QEMU NVMe Ctrl", 00:16:03.615 "serial_number": "12341", 00:16:03.615 "firmware_revision": "8.0.0", 00:16:03.615 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:03.615 "oacs": { 00:16:03.615 "security": 0, 00:16:03.615 "format": 1, 00:16:03.615 "firmware": 0, 00:16:03.615 "ns_manage": 1 00:16:03.615 }, 00:16:03.615 "multi_ctrlr": false, 00:16:03.615 "ana_reporting": false 00:16:03.615 }, 00:16:03.615 "vs": { 00:16:03.615 "nvme_version": "1.4" 00:16:03.615 }, 00:16:03.615 "ns_data": { 00:16:03.615 "id": 1, 00:16:03.615 "can_share": false 00:16:03.615 } 00:16:03.615 } 00:16:03.615 ], 00:16:03.615 "mp_policy": "active_passive" 00:16:03.615 } 00:16:03.615 } 00:16:03.615 ]' 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:03.615 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:03.875 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=85201ff3-43f3-4998-98ad-5b07ff9634e8 00:16:03.875 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:03.875 04:36:26 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85201ff3-43f3-4998-98ad-5b07ff9634e8 00:16:04.136 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:04.396 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9e9e7217-ff78-44ec-afdc-0f34c716da61 00:16:04.396 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9e9e7217-ff78-44ec-afdc-0f34c716da61 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 117451a0-f6dd-4caf-b876-d219131efa09 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:04.658 { 00:16:04.658 "name": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:04.658 "aliases": [ 00:16:04.658 "lvs/nvme0n1p0" 00:16:04.658 ], 00:16:04.658 "product_name": "Logical Volume", 00:16:04.658 "block_size": 4096, 00:16:04.658 "num_blocks": 26476544, 00:16:04.658 "uuid": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:04.658 "assigned_rate_limits": { 00:16:04.658 "rw_ios_per_sec": 0, 00:16:04.658 "rw_mbytes_per_sec": 0, 00:16:04.658 "r_mbytes_per_sec": 0, 00:16:04.658 "w_mbytes_per_sec": 0 00:16:04.658 }, 00:16:04.658 "claimed": false, 00:16:04.658 "zoned": false, 00:16:04.658 "supported_io_types": { 00:16:04.658 "read": true, 00:16:04.658 "write": true, 00:16:04.658 "unmap": true, 00:16:04.658 "flush": false, 00:16:04.658 "reset": true, 00:16:04.658 "nvme_admin": false, 00:16:04.658 "nvme_io": false, 00:16:04.658 "nvme_io_md": false, 00:16:04.658 "write_zeroes": true, 00:16:04.658 "zcopy": false, 00:16:04.658 "get_zone_info": false, 00:16:04.658 "zone_management": false, 00:16:04.658 "zone_append": false, 00:16:04.658 "compare": false, 00:16:04.658 "compare_and_write": false, 00:16:04.658 "abort": false, 00:16:04.658 "seek_hole": true, 00:16:04.658 "seek_data": true, 00:16:04.658 "copy": false, 00:16:04.658 "nvme_iov_md": false 00:16:04.658 }, 00:16:04.658 "driver_specific": { 00:16:04.658 "lvol": { 00:16:04.658 "lvol_store_uuid": "9e9e7217-ff78-44ec-afdc-0f34c716da61", 00:16:04.658 "base_bdev": "nvme0n1", 00:16:04.658 "thin_provision": true, 00:16:04.658 "num_allocated_clusters": 0, 00:16:04.658 "snapshot": false, 00:16:04.658 "clone": false, 00:16:04.658 "esnap_clone": false 00:16:04.658 } 00:16:04.658 } 00:16:04.658 } 00:16:04.658 ]' 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:04.658 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:04.920 04:36:27 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:05.181 { 00:16:05.181 "name": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:05.181 "aliases": [ 00:16:05.181 "lvs/nvme0n1p0" 00:16:05.181 ], 00:16:05.181 "product_name": "Logical Volume", 00:16:05.181 "block_size": 4096, 00:16:05.181 "num_blocks": 26476544, 00:16:05.181 "uuid": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:05.181 "assigned_rate_limits": { 00:16:05.181 "rw_ios_per_sec": 0, 00:16:05.181 "rw_mbytes_per_sec": 0, 00:16:05.181 "r_mbytes_per_sec": 0, 00:16:05.181 "w_mbytes_per_sec": 0 00:16:05.181 }, 00:16:05.181 "claimed": false, 00:16:05.181 "zoned": false, 00:16:05.181 "supported_io_types": { 00:16:05.181 "read": true, 00:16:05.181 "write": true, 00:16:05.181 "unmap": true, 00:16:05.181 "flush": false, 00:16:05.181 "reset": true, 00:16:05.181 "nvme_admin": false, 00:16:05.181 "nvme_io": false, 00:16:05.181 "nvme_io_md": false, 00:16:05.181 "write_zeroes": true, 00:16:05.181 "zcopy": false, 00:16:05.181 "get_zone_info": false, 00:16:05.181 "zone_management": false, 00:16:05.181 "zone_append": false, 00:16:05.181 "compare": false, 00:16:05.181 "compare_and_write": false, 00:16:05.181 "abort": false, 00:16:05.181 "seek_hole": true, 00:16:05.181 "seek_data": true, 00:16:05.181 "copy": false, 00:16:05.181 "nvme_iov_md": false 00:16:05.181 }, 00:16:05.181 "driver_specific": { 00:16:05.181 "lvol": { 00:16:05.181 "lvol_store_uuid": "9e9e7217-ff78-44ec-afdc-0f34c716da61", 00:16:05.181 "base_bdev": "nvme0n1", 00:16:05.181 "thin_provision": true, 00:16:05.181 "num_allocated_clusters": 0, 00:16:05.181 "snapshot": false, 00:16:05.181 "clone": false, 00:16:05.181 "esnap_clone": false 00:16:05.181 } 00:16:05.181 } 00:16:05.181 } 00:16:05.181 ]' 00:16:05.181 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:16:05.442 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:05.701 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 117451a0-f6dd-4caf-b876-d219131efa09 00:16:05.701 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:05.701 { 00:16:05.701 "name": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:05.701 "aliases": [ 00:16:05.701 "lvs/nvme0n1p0" 00:16:05.701 ], 00:16:05.701 "product_name": "Logical Volume", 00:16:05.701 "block_size": 4096, 00:16:05.701 "num_blocks": 26476544, 00:16:05.701 "uuid": "117451a0-f6dd-4caf-b876-d219131efa09", 00:16:05.701 "assigned_rate_limits": { 00:16:05.701 "rw_ios_per_sec": 0, 00:16:05.701 "rw_mbytes_per_sec": 0, 00:16:05.701 "r_mbytes_per_sec": 0, 00:16:05.701 "w_mbytes_per_sec": 0 00:16:05.701 }, 00:16:05.701 "claimed": false, 00:16:05.701 "zoned": false, 00:16:05.701 "supported_io_types": { 00:16:05.701 "read": true, 00:16:05.701 "write": true, 00:16:05.701 "unmap": true, 00:16:05.701 "flush": false, 00:16:05.701 "reset": true, 00:16:05.701 "nvme_admin": false, 00:16:05.701 "nvme_io": false, 00:16:05.701 "nvme_io_md": false, 00:16:05.701 "write_zeroes": true, 00:16:05.701 "zcopy": false, 00:16:05.701 "get_zone_info": false, 00:16:05.701 "zone_management": false, 00:16:05.701 "zone_append": false, 00:16:05.701 "compare": false, 00:16:05.701 "compare_and_write": false, 00:16:05.701 "abort": false, 00:16:05.701 "seek_hole": true, 00:16:05.701 "seek_data": true, 00:16:05.701 "copy": false, 00:16:05.701 "nvme_iov_md": false 00:16:05.701 }, 00:16:05.701 "driver_specific": { 00:16:05.701 "lvol": { 00:16:05.701 "lvol_store_uuid": "9e9e7217-ff78-44ec-afdc-0f34c716da61", 00:16:05.701 "base_bdev": "nvme0n1", 00:16:05.701 "thin_provision": true, 00:16:05.701 "num_allocated_clusters": 0, 00:16:05.701 "snapshot": false, 00:16:05.701 "clone": false, 00:16:05.701 "esnap_clone": false 00:16:05.701 } 00:16:05.701 } 00:16:05.701 } 00:16:05.701 ]' 00:16:05.701 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:05.701 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:05.701 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:05.960 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:05.960 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:05.960 04:36:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:05.960 04:36:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:05.960 04:36:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 117451a0-f6dd-4caf-b876-d219131efa09 -c nvc0n1p0 --l2p_dram_limit 20 00:16:05.960 [2024-11-03 04:36:28.973555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-03 04:36:28.973687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:05.960 [2024-11-03 04:36:28.973702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:05.960 [2024-11-03 04:36:28.973710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-03 04:36:28.973768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-03 04:36:28.973778] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.960 [2024-11-03 04:36:28.973784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:05.960 [2024-11-03 04:36:28.973793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-03 04:36:28.973806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:05.960 [2024-11-03 04:36:28.974347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:05.960 [2024-11-03 04:36:28.974360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-03 04:36:28.974370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.960 [2024-11-03 04:36:28.974376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:16:05.960 [2024-11-03 04:36:28.974383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-03 04:36:28.974434] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c90170e3-9e4f-409f-a26c-3a6dfeea2273 00:16:05.960 [2024-11-03 04:36:28.975364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-03 04:36:28.975389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:05.960 [2024-11-03 04:36:28.975399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:05.960 [2024-11-03 04:36:28.975407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-03 04:36:28.980043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.960 [2024-11-03 04:36:28.980068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.961 [2024-11-03 04:36:28.980078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:16:05.961 [2024-11-03 04:36:28.980083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.980147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.980154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.961 [2024-11-03 04:36:28.980167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:05.961 [2024-11-03 04:36:28.980172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.980202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.980210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:05.961 [2024-11-03 04:36:28.980217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:05.961 [2024-11-03 04:36:28.980223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.980240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:05.961 [2024-11-03 04:36:28.983090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.983115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.961 [2024-11-03 04:36:28.983123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:16:05.961 [2024-11-03 04:36:28.983130] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.983153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.983162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:05.961 [2024-11-03 04:36:28.983168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:05.961 [2024-11-03 04:36:28.983175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.983192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:05.961 [2024-11-03 04:36:28.983298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:05.961 [2024-11-03 04:36:28.983309] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:05.961 [2024-11-03 04:36:28.983319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:05.961 [2024-11-03 04:36:28.983328] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983342] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:05.961 [2024-11-03 04:36:28.983350] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:05.961 [2024-11-03 04:36:28.983356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:05.961 [2024-11-03 04:36:28.983362] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:05.961 [2024-11-03 04:36:28.983368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.983376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:05.961 [2024-11-03 04:36:28.983383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:16:05.961 [2024-11-03 04:36:28.983393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.983455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.961 [2024-11-03 04:36:28.983463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:05.961 [2024-11-03 04:36:28.983469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:05.961 [2024-11-03 04:36:28.983477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.961 [2024-11-03 04:36:28.983545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:05.961 [2024-11-03 04:36:28.983554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:05.961 [2024-11-03 04:36:28.983569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:05.961 [2024-11-03 04:36:28.983591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:05.961 
[2024-11-03 04:36:28.983603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:05.961 [2024-11-03 04:36:28.983609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:05.961 [2024-11-03 04:36:28.983621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:05.961 [2024-11-03 04:36:28.983629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:05.961 [2024-11-03 04:36:28.983640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:05.961 [2024-11-03 04:36:28.983653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:05.961 [2024-11-03 04:36:28.983658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:05.961 [2024-11-03 04:36:28.983667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:05.961 [2024-11-03 04:36:28.983680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:05.961 [2024-11-03 04:36:28.983697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:05.961 [2024-11-03 04:36:28.983715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:05.961 [2024-11-03 04:36:28.983735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:05.961 [2024-11-03 04:36:28.983753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:05.961 [2024-11-03 04:36:28.983771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:05.961 [2024-11-03 04:36:28.983782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:05.961 [2024-11-03 04:36:28.983789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:05.961 [2024-11-03 04:36:28.983794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:05.961 [2024-11-03 04:36:28.983800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:05.961 [2024-11-03 04:36:28.983806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:05.961 [2024-11-03 04:36:28.983812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:05.961 [2024-11-03 04:36:28.983823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:05.961 [2024-11-03 04:36:28.983827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983834] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:05.961 [2024-11-03 04:36:28.983840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:05.961 [2024-11-03 04:36:28.983847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.961 [2024-11-03 04:36:28.983861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:05.961 [2024-11-03 04:36:28.983866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:05.961 [2024-11-03 04:36:28.983872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:05.961 [2024-11-03 04:36:28.983877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:05.961 [2024-11-03 04:36:28.983884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:05.961 [2024-11-03 04:36:28.983889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:05.961 [2024-11-03 04:36:28.983898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:05.961 [2024-11-03 04:36:28.983905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:05.961 [2024-11-03 04:36:28.983913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:05.961 [2024-11-03 04:36:28.983921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:05.961 [2024-11-03 04:36:28.983928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:05.961 [2024-11-03 04:36:28.983934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:05.961 [2024-11-03 04:36:28.983940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:05.961 [2024-11-03 04:36:28.983945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:05.961 [2024-11-03 04:36:28.983953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:05.961 [2024-11-03 04:36:28.983959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:05.961 [2024-11-03 04:36:28.983967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:05.962 [2024-11-03 04:36:28.983972] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.983978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.983983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.983991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.983997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:05.962 [2024-11-03 04:36:28.984004] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:05.962 [2024-11-03 04:36:28.984010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.984021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:05.962 [2024-11-03 04:36:28.984027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:05.962 [2024-11-03 04:36:28.984034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:05.962 [2024-11-03 04:36:28.984039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:05.962 [2024-11-03 04:36:28.984046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.962 [2024-11-03 04:36:28.984051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:05.962 [2024-11-03 04:36:28.984060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:16:05.962 [2024-11-03 04:36:28.984065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.962 [2024-11-03 04:36:28.984097] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
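A quick cross-check of the layout figures printed above (plain shell arithmetic on the logged values, not an SPDK command): 20971520 L2P entries of 4 bytes come to 80 MiB, matching the 80.00 MiB "Region l2p", and the same entry count times the 4096-byte block size gives an 80 GiB addressable range on the 103424 MiB base volume. The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that table is kept resident in DRAM, which is why startup later reports "l2p maximum resident size is: 19 (of 20) MiB".

  # back-of-envelope check, values copied from the log above
  l2p_entries=20971520    # "L2P entries"
  entry_size=4            # "L2P address size: 4"
  block_size=4096         # lvol block_size reported by bdev_get_bdevs
  echo "L2P table:   $(( l2p_entries * entry_size / 1024 / 1024 )) MiB"           # 80 MiB
  echo "addressable: $(( l2p_entries * block_size / 1024 / 1024 / 1024 )) GiB"    # 80 GiB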
00:16:05.962 [2024-11-03 04:36:28.984105] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:10.166 [2024-11-03 04:36:32.954841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:32.955185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:10.166 [2024-11-03 04:36:32.955223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3970.714 ms 00:16:10.166 [2024-11-03 04:36:32.955237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.166 [2024-11-03 04:36:32.987491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:32.987552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.166 [2024-11-03 04:36:32.987597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.981 ms 00:16:10.166 [2024-11-03 04:36:32.987607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.166 [2024-11-03 04:36:32.987772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:32.987787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:10.166 [2024-11-03 04:36:32.987806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:10.166 [2024-11-03 04:36:32.987815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.166 [2024-11-03 04:36:33.036113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:33.036173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.166 [2024-11-03 04:36:33.036192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.259 ms 00:16:10.166 [2024-11-03 04:36:33.036201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.166 [2024-11-03 04:36:33.036246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:33.036256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.166 [2024-11-03 04:36:33.036267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:10.166 [2024-11-03 04:36:33.036279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.166 [2024-11-03 04:36:33.036963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.166 [2024-11-03 04:36:33.036993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.167 [2024-11-03 04:36:33.037007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:16:10.167 [2024-11-03 04:36:33.037017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.037144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.037165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.167 [2024-11-03 04:36:33.037179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:10.167 [2024-11-03 04:36:33.037189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.053118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.053163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.167 [2024-11-03 
04:36:33.053177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.908 ms 00:16:10.167 [2024-11-03 04:36:33.053186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.066360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:10.167 [2024-11-03 04:36:33.073617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.073667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:10.167 [2024-11-03 04:36:33.073678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.341 ms 00:16:10.167 [2024-11-03 04:36:33.073689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.170139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.170214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:10.167 [2024-11-03 04:36:33.170228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.421 ms 00:16:10.167 [2024-11-03 04:36:33.170239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.170447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.170467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:10.167 [2024-11-03 04:36:33.170477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:16:10.167 [2024-11-03 04:36:33.170488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.197007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.197065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:10.167 [2024-11-03 04:36:33.197079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:16:10.167 [2024-11-03 04:36:33.197091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.221984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.222040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:10.167 [2024-11-03 04:36:33.222053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.845 ms 00:16:10.167 [2024-11-03 04:36:33.222064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.167 [2024-11-03 04:36:33.222722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.167 [2024-11-03 04:36:33.222750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:10.167 [2024-11-03 04:36:33.222761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:16:10.167 [2024-11-03 04:36:33.222772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.427 [2024-11-03 04:36:33.304945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.427 [2024-11-03 04:36:33.305011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:10.427 [2024-11-03 04:36:33.305023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.133 ms 00:16:10.427 [2024-11-03 04:36:33.305034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.427 [2024-11-03 
04:36:33.332777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.427 [2024-11-03 04:36:33.332833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:10.427 [2024-11-03 04:36:33.332847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.651 ms 00:16:10.427 [2024-11-03 04:36:33.332859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.427 [2024-11-03 04:36:33.358961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.427 [2024-11-03 04:36:33.359015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:10.427 [2024-11-03 04:36:33.359028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.047 ms 00:16:10.427 [2024-11-03 04:36:33.359039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.427 [2024-11-03 04:36:33.385220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.427 [2024-11-03 04:36:33.385279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:10.427 [2024-11-03 04:36:33.385293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.132 ms 00:16:10.427 [2024-11-03 04:36:33.385304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.428 [2024-11-03 04:36:33.385356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.428 [2024-11-03 04:36:33.385376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:10.428 [2024-11-03 04:36:33.385385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:10.428 [2024-11-03 04:36:33.385396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.428 [2024-11-03 04:36:33.385489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.428 [2024-11-03 04:36:33.385504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:10.428 [2024-11-03 04:36:33.385512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:10.428 [2024-11-03 04:36:33.385523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.428 [2024-11-03 04:36:33.387214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4413.140 ms, result 0 00:16:10.428 { 00:16:10.428 "name": "ftl0", 00:16:10.428 "uuid": "c90170e3-9e4f-409f-a26c-3a6dfeea2273" 00:16:10.428 } 00:16:10.428 04:36:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:10.428 04:36:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:10.428 04:36:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:10.689 04:36:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:10.689 [2024-11-03 04:36:33.714868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:10.689 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:10.689 Zero copy mechanism will not be used. 00:16:10.689 Running I/O for 4 seconds... 
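This first pass drives ftl0 with queue depth 1 and 69632-byte (68 KiB) random writes, a size above the 65536-byte zero-copy threshold, hence the notice above that zero copy will not be used. The MiB/s column in the samples and summary that follow is simply IOPS scaled by the I/O size; a sketch of the conversion (not tool output):

  io_size=69632   # bytes per request (-o 69632)
  iops=910.11     # "Total" IOPS from the summary below
  # MiB/s = IOPS * io_size / 2^20, about 60.44 MiB/s, matching the reported total
  awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'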
00:16:12.649 864.00 IOPS, 57.38 MiB/s [2024-11-03T04:36:37.117Z] 849.50 IOPS, 56.41 MiB/s [2024-11-03T04:36:38.060Z] 959.33 IOPS, 63.71 MiB/s [2024-11-03T04:36:38.060Z] 910.50 IOPS, 60.46 MiB/s 00:16:14.976 Latency(us) 00:16:14.976 [2024-11-03T04:36:38.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.976 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:14.976 ftl0 : 4.00 910.11 60.44 0.00 0.00 1156.29 190.62 3428.04 00:16:14.976 [2024-11-03T04:36:38.060Z] =================================================================================================================== 00:16:14.976 [2024-11-03T04:36:38.060Z] Total : 910.11 60.44 0.00 0.00 1156.29 190.62 3428.04 00:16:14.976 { 00:16:14.976 "results": [ 00:16:14.976 { 00:16:14.976 "job": "ftl0", 00:16:14.976 "core_mask": "0x1", 00:16:14.976 "workload": "randwrite", 00:16:14.976 "status": "finished", 00:16:14.976 "queue_depth": 1, 00:16:14.976 "io_size": 69632, 00:16:14.976 "runtime": 4.002815, 00:16:14.976 "iops": 910.1095104320334, 00:16:14.976 "mibps": 60.43695967712722, 00:16:14.976 "io_failed": 0, 00:16:14.976 "io_timeout": 0, 00:16:14.976 "avg_latency_us": 1156.2863607761988, 00:16:14.976 "min_latency_us": 190.62153846153845, 00:16:14.976 "max_latency_us": 3428.036923076923 00:16:14.976 } 00:16:14.976 ], 00:16:14.976 "core_count": 1 00:16:14.976 } 00:16:14.976 [2024-11-03 04:36:37.727217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:14.976 04:36:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:14.976 [2024-11-03 04:36:37.835308] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:14.976 Running I/O for 4 seconds... 
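The second and third passes reuse the same bdevperf process: the FTL bdev stays up between runs, and each bdevperf.py perform_tests call above just supplies a new queue depth, I/O size and workload. A rough equivalent when reproducing this by hand (the -z/--json flags and file names are the customary standalone invocation, assumed here rather than taken from this log):

  # start bdevperf as an RPC server with the bdev configuration, then trigger a run
  ./build/examples/bdevperf --json ./bdev_ftl.json -z -q 128 -o 4096 -w randwrite -t 4 &
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096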
00:16:16.859 5977.00 IOPS, 23.35 MiB/s [2024-11-03T04:36:40.885Z] 5809.50 IOPS, 22.69 MiB/s [2024-11-03T04:36:42.338Z] 5581.00 IOPS, 21.80 MiB/s [2024-11-03T04:36:42.338Z] 5612.00 IOPS, 21.92 MiB/s 00:16:19.254 Latency(us) 00:16:19.254 [2024-11-03T04:36:42.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.254 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.254 ftl0 : 4.03 5595.49 21.86 0.00 0.00 22771.90 258.36 124215.93 00:16:19.254 [2024-11-03T04:36:42.338Z] =================================================================================================================== 00:16:19.254 [2024-11-03T04:36:42.338Z] Total : 5595.49 21.86 0.00 0.00 22771.90 0.00 124215.93 00:16:19.254 { 00:16:19.254 "results": [ 00:16:19.254 { 00:16:19.254 "job": "ftl0", 00:16:19.254 "core_mask": "0x1", 00:16:19.254 "workload": "randwrite", 00:16:19.254 "status": "finished", 00:16:19.254 "queue_depth": 128, 00:16:19.254 "io_size": 4096, 00:16:19.254 "runtime": 4.034681, 00:16:19.254 "iops": 5595.485739764804, 00:16:19.254 "mibps": 21.857366170956265, 00:16:19.254 "io_failed": 0, 00:16:19.254 "io_timeout": 0, 00:16:19.254 "avg_latency_us": 22771.898860600777, 00:16:19.254 "min_latency_us": 258.3630769230769, 00:16:19.254 "max_latency_us": 124215.92615384616 00:16:19.254 } 00:16:19.254 ], 00:16:19.254 "core_count": 1 00:16:19.254 } 00:16:19.254 [2024-11-03 04:36:41.880362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:19.254 04:36:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:19.254 [2024-11-03 04:36:41.988929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:19.254 Running I/O for 4 seconds... 
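The last pass uses -w verify, which reads back what it writes and checks the data; its verify_range length of 20971520 blocks in the results below covers the whole ftl0 namespace and matches the L2P entry count reported at startup. Each run also emits its summary as JSON (the "results" blocks in this log), so the headline numbers can be extracted with the same jq already used in this job; a sketch, assuming the JSON has been saved to results.json:

  jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json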
00:16:21.138 4404.00 IOPS, 17.20 MiB/s [2024-11-03T04:36:45.166Z] 4745.00 IOPS, 18.54 MiB/s [2024-11-03T04:36:46.109Z] 4634.33 IOPS, 18.10 MiB/s [2024-11-03T04:36:46.109Z] 4598.00 IOPS, 17.96 MiB/s 00:16:23.025 Latency(us) 00:16:23.025 [2024-11-03T04:36:46.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.025 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.025 Verification LBA range: start 0x0 length 0x1400000 00:16:23.025 ftl0 : 4.02 4612.17 18.02 0.00 0.00 27671.88 281.99 43152.94 00:16:23.025 [2024-11-03T04:36:46.109Z] =================================================================================================================== 00:16:23.025 [2024-11-03T04:36:46.109Z] Total : 4612.17 18.02 0.00 0.00 27671.88 0.00 43152.94 00:16:23.025 [2024-11-03 04:36:46.021121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:23.025 { 00:16:23.025 "results": [ 00:16:23.025 { 00:16:23.025 "job": "ftl0", 00:16:23.025 "core_mask": "0x1", 00:16:23.025 "workload": "verify", 00:16:23.025 "status": "finished", 00:16:23.025 "verify_range": { 00:16:23.025 "start": 0, 00:16:23.025 "length": 20971520 00:16:23.025 }, 00:16:23.025 "queue_depth": 128, 00:16:23.025 "io_size": 4096, 00:16:23.025 "runtime": 4.015466, 00:16:23.025 "iops": 4612.167056077676, 00:16:23.025 "mibps": 18.01627756280342, 00:16:23.025 "io_failed": 0, 00:16:23.025 "io_timeout": 0, 00:16:23.025 "avg_latency_us": 27671.881282605085, 00:16:23.025 "min_latency_us": 281.99384615384616, 00:16:23.025 "max_latency_us": 43152.93538461538 00:16:23.025 } 00:16:23.025 ], 00:16:23.025 "core_count": 1 00:16:23.025 } 00:16:23.025 04:36:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:23.286 [2024-11-03 04:36:46.232007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.286 [2024-11-03 04:36:46.232069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:23.286 [2024-11-03 04:36:46.232083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.286 [2024-11-03 04:36:46.232097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.286 [2024-11-03 04:36:46.232119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:23.286 [2024-11-03 04:36:46.235095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.286 [2024-11-03 04:36:46.235139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:23.286 [2024-11-03 04:36:46.235153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:16:23.286 [2024-11-03 04:36:46.235163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.286 [2024-11-03 04:36:46.238384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.286 [2024-11-03 04:36:46.238432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:23.286 [2024-11-03 04:36:46.238446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:16:23.286 [2024-11-03 04:36:46.238455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.448383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.448448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:23.548 [2024-11-03 04:36:46.448468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 209.901 ms 00:16:23.548 [2024-11-03 04:36:46.448477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.454728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.454770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:23.548 [2024-11-03 04:36:46.454785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:16:23.548 [2024-11-03 04:36:46.454794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.481189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.481238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:23.548 [2024-11-03 04:36:46.481253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.333 ms 00:16:23.548 [2024-11-03 04:36:46.481261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.497939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.497998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:23.548 [2024-11-03 04:36:46.498020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.625 ms 00:16:23.548 [2024-11-03 04:36:46.498032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.498193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.498208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:23.548 [2024-11-03 04:36:46.498224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:23.548 [2024-11-03 04:36:46.498233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.523450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.523495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:23.548 [2024-11-03 04:36:46.523509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.193 ms 00:16:23.548 [2024-11-03 04:36:46.523518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.548467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.548512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:23.548 [2024-11-03 04:36:46.548526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.886 ms 00:16:23.548 [2024-11-03 04:36:46.548534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.573056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.573103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:23.548 [2024-11-03 04:36:46.573117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.438 ms 00:16:23.548 [2024-11-03 04:36:46.573125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.548 [2024-11-03 04:36:46.597374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.548 [2024-11-03 04:36:46.597419] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:23.548 [2024-11-03 04:36:46.597436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.162 ms 00:16:23.549 [2024-11-03 04:36:46.597444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.549 [2024-11-03 04:36:46.597490] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:23.549 [2024-11-03 04:36:46.597507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:23.549 [2024-11-03 04:36:46.597729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.597993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:23.549 [2024-11-03 04:36:46.598353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:23.550 [2024-11-03 04:36:46.598490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:23.550 [2024-11-03 04:36:46.598502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c90170e3-9e4f-409f-a26c-3a6dfeea2273 00:16:23.550 [2024-11-03 04:36:46.598510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:23.550 [2024-11-03 04:36:46.598519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:23.550 [2024-11-03 04:36:46.598526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:23.550 [2024-11-03 04:36:46.598536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:23.550 [2024-11-03 04:36:46.598545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:23.550 [2024-11-03 04:36:46.598555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:23.550 [2024-11-03 04:36:46.598576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:23.550 [2024-11-03 04:36:46.598587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:23.550 [2024-11-03 04:36:46.598594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:23.550 [2024-11-03 04:36:46.598604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.550 [2024-11-03 04:36:46.598612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:23.550 [2024-11-03 04:36:46.598623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:16:23.550 [2024-11-03 04:36:46.598631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.550 [2024-11-03 04:36:46.612631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.550 [2024-11-03 04:36:46.612673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:23.550 [2024-11-03 04:36:46.612689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.959 ms 00:16:23.550 [2024-11-03 04:36:46.612697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.550 [2024-11-03 04:36:46.613115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.550 [2024-11-03 04:36:46.613143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:23.550 [2024-11-03 04:36:46.613155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:16:23.550 [2024-11-03 04:36:46.613163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.652124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.652172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:23.812 [2024-11-03 04:36:46.652188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.652198] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.652266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.652274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:23.812 [2024-11-03 04:36:46.652285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.652293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.652389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.652404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:23.812 [2024-11-03 04:36:46.652417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.652425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.652442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.652452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:23.812 [2024-11-03 04:36:46.652462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.652470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.737075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.737130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:23.812 [2024-11-03 04:36:46.737151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.737160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:23.812 [2024-11-03 04:36:46.807179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:23.812 [2024-11-03 04:36:46.807300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:23.812 [2024-11-03 04:36:46.807406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:23.812 [2024-11-03 04:36:46.807538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:23.812 [2024-11-03 04:36:46.807547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:23.812 [2024-11-03 04:36:46.807635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:23.812 [2024-11-03 04:36:46.807711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.812 [2024-11-03 04:36:46.807795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:23.812 [2024-11-03 04:36:46.807805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.812 [2024-11-03 04:36:46.807815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.812 [2024-11-03 04:36:46.807960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 575.901 ms, result 0 00:16:23.812 true 00:16:23.812 04:36:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73204 00:16:23.812 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73204 ']' 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73204 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73204 00:16:23.813 killing process with pid 73204 00:16:23.813 Received shutdown signal, test time was about 4.000000 seconds 00:16:23.813 00:16:23.813 Latency(us) 00:16:23.813 [2024-11-03T04:36:46.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.813 [2024-11-03T04:36:46.897Z] =================================================================================================================== 00:16:23.813 [2024-11-03T04:36:46.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73204' 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73204 00:16:23.813 04:36:46 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73204 00:16:24.756 Remove shared memory files 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:24.756 04:36:47 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:24.756 00:16:24.756 real 0m22.641s 00:16:24.756 user 0m25.162s 00:16:24.756 sys 0m0.971s 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:24.756 ************************************ 00:16:24.756 END TEST ftl_bdevperf 00:16:24.756 ************************************ 00:16:24.756 04:36:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:24.757 04:36:47 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:24.757 04:36:47 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:24.757 04:36:47 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.757 04:36:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:24.757 ************************************ 00:16:24.757 START TEST ftl_trim 00:16:24.757 ************************************ 00:16:24.757 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:24.757 * Looking for test storage... 00:16:24.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.757 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:24.757 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:24.757 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:25.018 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.018 04:36:47 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:25.018 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.018 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:25.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.018 --rc genhtml_branch_coverage=1 00:16:25.018 --rc genhtml_function_coverage=1 00:16:25.018 --rc genhtml_legend=1 00:16:25.018 --rc geninfo_all_blocks=1 00:16:25.018 --rc geninfo_unexecuted_blocks=1 00:16:25.018 00:16:25.018 ' 00:16:25.018 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:25.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.018 --rc genhtml_branch_coverage=1 00:16:25.019 --rc genhtml_function_coverage=1 00:16:25.019 --rc genhtml_legend=1 00:16:25.019 --rc geninfo_all_blocks=1 00:16:25.019 --rc geninfo_unexecuted_blocks=1 00:16:25.019 00:16:25.019 ' 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.019 --rc genhtml_branch_coverage=1 00:16:25.019 --rc genhtml_function_coverage=1 00:16:25.019 --rc genhtml_legend=1 00:16:25.019 --rc geninfo_all_blocks=1 00:16:25.019 --rc geninfo_unexecuted_blocks=1 00:16:25.019 00:16:25.019 ' 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.019 --rc genhtml_branch_coverage=1 00:16:25.019 --rc genhtml_function_coverage=1 00:16:25.019 --rc genhtml_legend=1 00:16:25.019 --rc geninfo_all_blocks=1 00:16:25.019 --rc geninfo_unexecuted_blocks=1 00:16:25.019 00:16:25.019 ' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:25.019 04:36:47 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73571 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73571 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73571 ']' 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.019 04:36:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 04:36:47 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:25.019 [2024-11-03 04:36:48.000430] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:16:25.019 [2024-11-03 04:36:48.000605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:16:25.281 [2024-11-03 04:36:48.169285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.281 [2024-11-03 04:36:48.291659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.281 [2024-11-03 04:36:48.291864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.281 [2024-11-03 04:36:48.291945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.223 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.223 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:26.223 04:36:49 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:26.485 04:36:49 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:26.485 04:36:49 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:26.485 04:36:49 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:26.485 { 00:16:26.485 "name": "nvme0n1", 00:16:26.485 "aliases": [ 
00:16:26.485 "720c81f5-2763-4d6e-8f4e-e83a0eb5dafb" 00:16:26.485 ], 00:16:26.485 "product_name": "NVMe disk", 00:16:26.485 "block_size": 4096, 00:16:26.485 "num_blocks": 1310720, 00:16:26.485 "uuid": "720c81f5-2763-4d6e-8f4e-e83a0eb5dafb", 00:16:26.485 "numa_id": -1, 00:16:26.485 "assigned_rate_limits": { 00:16:26.485 "rw_ios_per_sec": 0, 00:16:26.485 "rw_mbytes_per_sec": 0, 00:16:26.485 "r_mbytes_per_sec": 0, 00:16:26.485 "w_mbytes_per_sec": 0 00:16:26.485 }, 00:16:26.485 "claimed": true, 00:16:26.485 "claim_type": "read_many_write_one", 00:16:26.485 "zoned": false, 00:16:26.485 "supported_io_types": { 00:16:26.485 "read": true, 00:16:26.485 "write": true, 00:16:26.485 "unmap": true, 00:16:26.485 "flush": true, 00:16:26.485 "reset": true, 00:16:26.485 "nvme_admin": true, 00:16:26.485 "nvme_io": true, 00:16:26.485 "nvme_io_md": false, 00:16:26.485 "write_zeroes": true, 00:16:26.485 "zcopy": false, 00:16:26.485 "get_zone_info": false, 00:16:26.485 "zone_management": false, 00:16:26.485 "zone_append": false, 00:16:26.485 "compare": true, 00:16:26.485 "compare_and_write": false, 00:16:26.485 "abort": true, 00:16:26.485 "seek_hole": false, 00:16:26.485 "seek_data": false, 00:16:26.485 "copy": true, 00:16:26.485 "nvme_iov_md": false 00:16:26.485 }, 00:16:26.485 "driver_specific": { 00:16:26.485 "nvme": [ 00:16:26.485 { 00:16:26.485 "pci_address": "0000:00:11.0", 00:16:26.485 "trid": { 00:16:26.485 "trtype": "PCIe", 00:16:26.485 "traddr": "0000:00:11.0" 00:16:26.485 }, 00:16:26.485 "ctrlr_data": { 00:16:26.485 "cntlid": 0, 00:16:26.485 "vendor_id": "0x1b36", 00:16:26.485 "model_number": "QEMU NVMe Ctrl", 00:16:26.485 "serial_number": "12341", 00:16:26.485 "firmware_revision": "8.0.0", 00:16:26.485 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:26.485 "oacs": { 00:16:26.485 "security": 0, 00:16:26.485 "format": 1, 00:16:26.485 "firmware": 0, 00:16:26.485 "ns_manage": 1 00:16:26.485 }, 00:16:26.485 "multi_ctrlr": false, 00:16:26.485 "ana_reporting": false 00:16:26.485 }, 00:16:26.485 "vs": { 00:16:26.485 "nvme_version": "1.4" 00:16:26.485 }, 00:16:26.485 "ns_data": { 00:16:26.485 "id": 1, 00:16:26.485 "can_share": false 00:16:26.485 } 00:16:26.485 } 00:16:26.485 ], 00:16:26.485 "mp_policy": "active_passive" 00:16:26.485 } 00:16:26.485 } 00:16:26.485 ]' 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:26.485 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:26.745 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:26.745 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:26.745 04:36:49 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9e9e7217-ff78-44ec-afdc-0f34c716da61 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:26.745 04:36:49 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 9e9e7217-ff78-44ec-afdc-0f34c716da61 00:16:27.005 04:36:50 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:27.265 04:36:50 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4bdf8c97-c6b2-413a-833b-2df7ecc846bc 00:16:27.265 04:36:50 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4bdf8c97-c6b2-413a-833b-2df7ecc846bc 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cdc3496c-2753-477f-8943-dcf993775721 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cdc3496c-2753-477f-8943-dcf993775721 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cdc3496c-2753-477f-8943-dcf993775721 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:27.526 04:36:50 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cdc3496c-2753-477f-8943-dcf993775721 00:16:27.526 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=cdc3496c-2753-477f-8943-dcf993775721 00:16:27.526 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:27.526 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:27.526 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:27.526 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdc3496c-2753-477f-8943-dcf993775721 00:16:27.785 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:27.785 { 00:16:27.785 "name": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:27.785 "aliases": [ 00:16:27.785 "lvs/nvme0n1p0" 00:16:27.785 ], 00:16:27.785 "product_name": "Logical Volume", 00:16:27.785 "block_size": 4096, 00:16:27.785 "num_blocks": 26476544, 00:16:27.785 "uuid": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:27.785 "assigned_rate_limits": { 00:16:27.785 "rw_ios_per_sec": 0, 00:16:27.785 "rw_mbytes_per_sec": 0, 00:16:27.785 "r_mbytes_per_sec": 0, 00:16:27.785 "w_mbytes_per_sec": 0 00:16:27.785 }, 00:16:27.785 "claimed": false, 00:16:27.785 "zoned": false, 00:16:27.785 "supported_io_types": { 00:16:27.785 "read": true, 00:16:27.785 "write": true, 00:16:27.785 "unmap": true, 00:16:27.785 "flush": false, 00:16:27.785 "reset": true, 00:16:27.786 "nvme_admin": false, 00:16:27.786 "nvme_io": false, 00:16:27.786 "nvme_io_md": false, 00:16:27.786 "write_zeroes": true, 00:16:27.786 "zcopy": false, 00:16:27.786 "get_zone_info": false, 00:16:27.786 "zone_management": false, 00:16:27.786 "zone_append": false, 00:16:27.786 "compare": false, 00:16:27.786 "compare_and_write": false, 00:16:27.786 "abort": false, 00:16:27.786 "seek_hole": true, 00:16:27.786 "seek_data": true, 00:16:27.786 "copy": false, 00:16:27.786 "nvme_iov_md": false 00:16:27.786 }, 00:16:27.786 "driver_specific": { 00:16:27.786 "lvol": { 00:16:27.786 "lvol_store_uuid": "4bdf8c97-c6b2-413a-833b-2df7ecc846bc", 00:16:27.786 "base_bdev": "nvme0n1", 00:16:27.786 "thin_provision": true, 00:16:27.786 "num_allocated_clusters": 0, 00:16:27.786 "snapshot": false, 00:16:27.786 "clone": false, 00:16:27.786 "esnap_clone": false 00:16:27.786 } 00:16:27.786 } 00:16:27.786 } 00:16:27.786 ]' 00:16:27.786 04:36:50 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:27.786 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:27.786 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:27.786 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:27.786 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:27.786 04:36:50 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:27.786 04:36:50 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:27.786 04:36:50 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:27.786 04:36:50 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:28.044 04:36:51 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:28.044 04:36:51 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:28.044 04:36:51 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cdc3496c-2753-477f-8943-dcf993775721 00:16:28.044 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=cdc3496c-2753-477f-8943-dcf993775721 00:16:28.044 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:28.044 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:28.044 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:28.044 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdc3496c-2753-477f-8943-dcf993775721 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:28.302 { 00:16:28.302 "name": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:28.302 "aliases": [ 00:16:28.302 "lvs/nvme0n1p0" 00:16:28.302 ], 00:16:28.302 "product_name": "Logical Volume", 00:16:28.302 "block_size": 4096, 00:16:28.302 "num_blocks": 26476544, 00:16:28.302 "uuid": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:28.302 "assigned_rate_limits": { 00:16:28.302 "rw_ios_per_sec": 0, 00:16:28.302 "rw_mbytes_per_sec": 0, 00:16:28.302 "r_mbytes_per_sec": 0, 00:16:28.302 "w_mbytes_per_sec": 0 00:16:28.302 }, 00:16:28.302 "claimed": false, 00:16:28.302 "zoned": false, 00:16:28.302 "supported_io_types": { 00:16:28.302 "read": true, 00:16:28.302 "write": true, 00:16:28.302 "unmap": true, 00:16:28.302 "flush": false, 00:16:28.302 "reset": true, 00:16:28.302 "nvme_admin": false, 00:16:28.302 "nvme_io": false, 00:16:28.302 "nvme_io_md": false, 00:16:28.302 "write_zeroes": true, 00:16:28.302 "zcopy": false, 00:16:28.302 "get_zone_info": false, 00:16:28.302 "zone_management": false, 00:16:28.302 "zone_append": false, 00:16:28.302 "compare": false, 00:16:28.302 "compare_and_write": false, 00:16:28.302 "abort": false, 00:16:28.302 "seek_hole": true, 00:16:28.302 "seek_data": true, 00:16:28.302 "copy": false, 00:16:28.302 "nvme_iov_md": false 00:16:28.302 }, 00:16:28.302 "driver_specific": { 00:16:28.302 "lvol": { 00:16:28.302 "lvol_store_uuid": "4bdf8c97-c6b2-413a-833b-2df7ecc846bc", 00:16:28.302 "base_bdev": "nvme0n1", 00:16:28.302 "thin_provision": true, 00:16:28.302 "num_allocated_clusters": 0, 00:16:28.302 "snapshot": false, 00:16:28.302 "clone": false, 00:16:28.302 "esnap_clone": false 00:16:28.302 } 00:16:28.302 } 00:16:28.302 } 00:16:28.302 ]' 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:28.302 04:36:51 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:28.302 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:28.302 04:36:51 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:28.302 04:36:51 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:28.560 04:36:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:28.560 04:36:51 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:28.560 04:36:51 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cdc3496c-2753-477f-8943-dcf993775721 00:16:28.560 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=cdc3496c-2753-477f-8943-dcf993775721 00:16:28.560 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:28.560 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:28.560 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:28.560 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdc3496c-2753-477f-8943-dcf993775721 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:28.818 { 00:16:28.818 "name": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:28.818 "aliases": [ 00:16:28.818 "lvs/nvme0n1p0" 00:16:28.818 ], 00:16:28.818 "product_name": "Logical Volume", 00:16:28.818 "block_size": 4096, 00:16:28.818 "num_blocks": 26476544, 00:16:28.818 "uuid": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:28.818 "assigned_rate_limits": { 00:16:28.818 "rw_ios_per_sec": 0, 00:16:28.818 "rw_mbytes_per_sec": 0, 00:16:28.818 "r_mbytes_per_sec": 0, 00:16:28.818 "w_mbytes_per_sec": 0 00:16:28.818 }, 00:16:28.818 "claimed": false, 00:16:28.818 "zoned": false, 00:16:28.818 "supported_io_types": { 00:16:28.818 "read": true, 00:16:28.818 "write": true, 00:16:28.818 "unmap": true, 00:16:28.818 "flush": false, 00:16:28.818 "reset": true, 00:16:28.818 "nvme_admin": false, 00:16:28.818 "nvme_io": false, 00:16:28.818 "nvme_io_md": false, 00:16:28.818 "write_zeroes": true, 00:16:28.818 "zcopy": false, 00:16:28.818 "get_zone_info": false, 00:16:28.818 "zone_management": false, 00:16:28.818 "zone_append": false, 00:16:28.818 "compare": false, 00:16:28.818 "compare_and_write": false, 00:16:28.818 "abort": false, 00:16:28.818 "seek_hole": true, 00:16:28.818 "seek_data": true, 00:16:28.818 "copy": false, 00:16:28.818 "nvme_iov_md": false 00:16:28.818 }, 00:16:28.818 "driver_specific": { 00:16:28.818 "lvol": { 00:16:28.818 "lvol_store_uuid": "4bdf8c97-c6b2-413a-833b-2df7ecc846bc", 00:16:28.818 "base_bdev": "nvme0n1", 00:16:28.818 "thin_provision": true, 00:16:28.818 "num_allocated_clusters": 0, 00:16:28.818 "snapshot": false, 00:16:28.818 "clone": false, 00:16:28.818 "esnap_clone": false 00:16:28.818 } 00:16:28.818 } 00:16:28.818 } 00:16:28.818 ]' 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:28.818 04:36:51 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:16:28.818 04:36:51 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:28.818 04:36:51 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cdc3496c-2753-477f-8943-dcf993775721 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:29.077 [2024-11-03 04:36:52.000347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.000382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.077 [2024-11-03 04:36:52.000395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:29.077 [2024-11-03 04:36:52.000401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.002590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.002621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.077 [2024-11-03 04:36:52.002631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:16:29.077 [2024-11-03 04:36:52.002638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.002711] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.077 [2024-11-03 04:36:52.003217] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.077 [2024-11-03 04:36:52.003243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.003250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.077 [2024-11-03 04:36:52.003258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:16:29.077 [2024-11-03 04:36:52.003264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.003401] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:16:29.077 [2024-11-03 04:36:52.004370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.004400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:29.077 [2024-11-03 04:36:52.004407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:29.077 [2024-11-03 04:36:52.004415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.009523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.009549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.077 [2024-11-03 04:36:52.009556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:16:29.077 [2024-11-03 04:36:52.009573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.009674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.009684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.077 [2024-11-03 04:36:52.009691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.053 ms 00:16:29.077 [2024-11-03 04:36:52.009701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.009733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.009740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.077 [2024-11-03 04:36:52.009747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:29.077 [2024-11-03 04:36:52.009754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.009783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:29.077 [2024-11-03 04:36:52.012640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.012666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.077 [2024-11-03 04:36:52.012675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:16:29.077 [2024-11-03 04:36:52.012681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.012721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.012728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.077 [2024-11-03 04:36:52.012735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:29.077 [2024-11-03 04:36:52.012751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.012790] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:29.077 [2024-11-03 04:36:52.012895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:29.077 [2024-11-03 04:36:52.012907] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.077 [2024-11-03 04:36:52.012916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:29.077 [2024-11-03 04:36:52.012925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.077 [2024-11-03 04:36:52.012932] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.077 [2024-11-03 04:36:52.012940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:29.077 [2024-11-03 04:36:52.012946] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:29.077 [2024-11-03 04:36:52.012953] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:29.077 [2024-11-03 04:36:52.012958] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:29.077 [2024-11-03 04:36:52.012965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 [2024-11-03 04:36:52.012972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.077 [2024-11-03 04:36:52.012980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:16:29.077 [2024-11-03 04:36:52.012985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.013065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.077 
[2024-11-03 04:36:52.013072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.077 [2024-11-03 04:36:52.013080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:29.077 [2024-11-03 04:36:52.013085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.077 [2024-11-03 04:36:52.013180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.077 [2024-11-03 04:36:52.013188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.077 [2024-11-03 04:36:52.013197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.077 [2024-11-03 04:36:52.013216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.077 [2024-11-03 04:36:52.013233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.077 [2024-11-03 04:36:52.013246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.077 [2024-11-03 04:36:52.013251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:29.077 [2024-11-03 04:36:52.013257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.077 [2024-11-03 04:36:52.013262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.077 [2024-11-03 04:36:52.013269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:29.077 [2024-11-03 04:36:52.013274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.077 [2024-11-03 04:36:52.013287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.077 [2024-11-03 04:36:52.013307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.077 [2024-11-03 04:36:52.013324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.077 [2024-11-03 04:36:52.013342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:29.077 [2024-11-03 04:36:52.013358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.077 [2024-11-03 04:36:52.013369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.077 [2024-11-03 04:36:52.013377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.077 [2024-11-03 04:36:52.013388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.077 [2024-11-03 04:36:52.013392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:29.077 [2024-11-03 04:36:52.013398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.077 [2024-11-03 04:36:52.013404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:29.077 [2024-11-03 04:36:52.013410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:29.077 [2024-11-03 04:36:52.013415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:29.077 [2024-11-03 04:36:52.013427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:29.077 [2024-11-03 04:36:52.013432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.077 [2024-11-03 04:36:52.013437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.078 [2024-11-03 04:36:52.013444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.078 [2024-11-03 04:36:52.013449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.078 [2024-11-03 04:36:52.013457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.078 [2024-11-03 04:36:52.013462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:29.078 [2024-11-03 04:36:52.013471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.078 [2024-11-03 04:36:52.013476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.078 [2024-11-03 04:36:52.013482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.078 [2024-11-03 04:36:52.013487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.078 [2024-11-03 04:36:52.013495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.078 [2024-11-03 04:36:52.013503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.078 [2024-11-03 04:36:52.013511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:29.078 [2024-11-03 04:36:52.013525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:29.078 [2024-11-03 04:36:52.013530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:29.078 [2024-11-03 04:36:52.013537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:29.078 [2024-11-03 04:36:52.013542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:29.078 [2024-11-03 04:36:52.013548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:29.078 [2024-11-03 04:36:52.013554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:29.078 [2024-11-03 04:36:52.013572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:29.078 [2024-11-03 04:36:52.013577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:29.078 [2024-11-03 04:36:52.013585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:29.078 [2024-11-03 04:36:52.013615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.078 [2024-11-03 04:36:52.013622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.078 [2024-11-03 04:36:52.013636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.078 [2024-11-03 04:36:52.013643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.078 [2024-11-03 04:36:52.013650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.078 [2024-11-03 04:36:52.013657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.078 [2024-11-03 04:36:52.013667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.078 [2024-11-03 04:36:52.013673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:16:29.078 [2024-11-03 04:36:52.013679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.078 [2024-11-03 04:36:52.013772] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:29.078 [2024-11-03 04:36:52.013783] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:31.606 [2024-11-03 04:36:54.334510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.334578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:31.606 [2024-11-03 04:36:54.334596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2320.729 ms 00:16:31.606 [2024-11-03 04:36:54.334606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.360027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.360069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.606 [2024-11-03 04:36:54.360082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.182 ms 00:16:31.606 [2024-11-03 04:36:54.360091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.360220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.360232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.606 [2024-11-03 04:36:54.360241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:31.606 [2024-11-03 04:36:54.360252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.398376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.398419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.606 [2024-11-03 04:36:54.398433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.075 ms 00:16:31.606 [2024-11-03 04:36:54.398444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.398539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.398553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.606 [2024-11-03 04:36:54.398576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.606 [2024-11-03 04:36:54.398585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.398905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.398924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.606 [2024-11-03 04:36:54.398933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:31.606 [2024-11-03 04:36:54.398942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.399056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.606 [2024-11-03 04:36:54.399067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.606 [2024-11-03 04:36:54.399075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:31.606 [2024-11-03 04:36:54.399086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.606 [2024-11-03 04:36:54.415668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.415700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:31.607 [2024-11-03 04:36:54.415710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.531 ms 00:16:31.607 [2024-11-03 04:36:54.415720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.427070] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.607 [2024-11-03 04:36:54.441513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.441543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.607 [2024-11-03 04:36:54.441555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:16:31.607 [2024-11-03 04:36:54.441574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.508104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.508154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:31.607 [2024-11-03 04:36:54.508168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.459 ms 00:16:31.607 [2024-11-03 04:36:54.508178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.508378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.508390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.607 [2024-11-03 04:36:54.508402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:16:31.607 [2024-11-03 04:36:54.508409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.531360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.531391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:31.607 [2024-11-03 04:36:54.531407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:16:31.607 [2024-11-03 04:36:54.531415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.553531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.553570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:31.607 [2024-11-03 04:36:54.553583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.067 ms 00:16:31.607 [2024-11-03 04:36:54.553590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.554153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.554170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.607 [2024-11-03 04:36:54.554181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:16:31.607 [2024-11-03 04:36:54.554188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.627132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.627172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:31.607 [2024-11-03 04:36:54.627190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.905 ms 00:16:31.607 [2024-11-03 04:36:54.627201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:31.607 [2024-11-03 04:36:54.651097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.651132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:31.607 [2024-11-03 04:36:54.651146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.796 ms 00:16:31.607 [2024-11-03 04:36:54.651154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.607 [2024-11-03 04:36:54.673739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.607 [2024-11-03 04:36:54.673771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:31.607 [2024-11-03 04:36:54.673783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.525 ms 00:16:31.607 [2024-11-03 04:36:54.673791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.864 [2024-11-03 04:36:54.697010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.864 [2024-11-03 04:36:54.697041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.864 [2024-11-03 04:36:54.697053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:16:31.864 [2024-11-03 04:36:54.697072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.864 [2024-11-03 04:36:54.697136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.864 [2024-11-03 04:36:54.697146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.864 [2024-11-03 04:36:54.697159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:31.864 [2024-11-03 04:36:54.697168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.864 [2024-11-03 04:36:54.697246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.864 [2024-11-03 04:36:54.697255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.864 [2024-11-03 04:36:54.697264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:31.864 [2024-11-03 04:36:54.697271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.864 [2024-11-03 04:36:54.698066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.864 [2024-11-03 04:36:54.701058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2697.413 ms, result 0 00:16:31.864 [2024-11-03 04:36:54.701825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.864 { 00:16:31.864 "name": "ftl0", 00:16:31.864 "uuid": "7c2ce644-fdde-421c-ba2f-76d9ec869ae4" 00:16:31.864 } 00:16:31.864 04:36:54 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.864 04:36:54 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.864 04:36:54 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:32.122 [ 00:16:32.122 { 00:16:32.122 "name": "ftl0", 00:16:32.122 "aliases": [ 00:16:32.122 "7c2ce644-fdde-421c-ba2f-76d9ec869ae4" 00:16:32.122 ], 00:16:32.122 "product_name": "FTL disk", 00:16:32.122 "block_size": 4096, 00:16:32.122 "num_blocks": 23592960, 00:16:32.122 "uuid": "7c2ce644-fdde-421c-ba2f-76d9ec869ae4", 00:16:32.122 "assigned_rate_limits": { 00:16:32.122 "rw_ios_per_sec": 0, 00:16:32.122 "rw_mbytes_per_sec": 0, 00:16:32.122 "r_mbytes_per_sec": 0, 00:16:32.122 "w_mbytes_per_sec": 0 00:16:32.122 }, 00:16:32.122 "claimed": false, 00:16:32.122 "zoned": false, 00:16:32.122 "supported_io_types": { 00:16:32.122 "read": true, 00:16:32.122 "write": true, 00:16:32.122 "unmap": true, 00:16:32.122 "flush": true, 00:16:32.122 "reset": false, 00:16:32.122 "nvme_admin": false, 00:16:32.122 "nvme_io": false, 00:16:32.122 "nvme_io_md": false, 00:16:32.122 "write_zeroes": true, 00:16:32.122 "zcopy": false, 00:16:32.122 "get_zone_info": false, 00:16:32.122 "zone_management": false, 00:16:32.122 "zone_append": false, 00:16:32.122 "compare": false, 00:16:32.122 "compare_and_write": false, 00:16:32.122 "abort": false, 00:16:32.122 "seek_hole": false, 00:16:32.122 "seek_data": false, 00:16:32.122 "copy": false, 00:16:32.122 "nvme_iov_md": false 00:16:32.122 }, 00:16:32.122 "driver_specific": { 00:16:32.122 "ftl": { 00:16:32.122 "base_bdev": "cdc3496c-2753-477f-8943-dcf993775721", 00:16:32.122 "cache": "nvc0n1p0" 00:16:32.122 } 00:16:32.122 } 00:16:32.122 } 00:16:32.122 ] 00:16:32.122 04:36:55 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:16:32.122 04:36:55 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:32.122 04:36:55 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:32.380 04:36:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:32.380 04:36:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:32.638 04:36:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:32.638 { 00:16:32.638 "name": "ftl0", 00:16:32.638 "aliases": [ 00:16:32.638 "7c2ce644-fdde-421c-ba2f-76d9ec869ae4" 00:16:32.638 ], 00:16:32.638 "product_name": "FTL disk", 00:16:32.638 "block_size": 4096, 00:16:32.638 "num_blocks": 23592960, 00:16:32.638 "uuid": "7c2ce644-fdde-421c-ba2f-76d9ec869ae4", 00:16:32.638 "assigned_rate_limits": { 00:16:32.638 "rw_ios_per_sec": 0, 00:16:32.638 "rw_mbytes_per_sec": 0, 00:16:32.638 "r_mbytes_per_sec": 0, 00:16:32.638 "w_mbytes_per_sec": 0 00:16:32.638 }, 00:16:32.638 "claimed": false, 00:16:32.638 "zoned": false, 00:16:32.638 "supported_io_types": { 00:16:32.638 "read": true, 00:16:32.638 "write": true, 00:16:32.638 "unmap": true, 00:16:32.638 "flush": true, 00:16:32.638 "reset": false, 00:16:32.638 "nvme_admin": false, 00:16:32.638 "nvme_io": false, 00:16:32.638 "nvme_io_md": false, 00:16:32.638 "write_zeroes": true, 00:16:32.638 "zcopy": false, 00:16:32.638 "get_zone_info": false, 00:16:32.638 "zone_management": false, 00:16:32.638 "zone_append": false, 00:16:32.638 "compare": false, 00:16:32.638 "compare_and_write": false, 00:16:32.638 "abort": false, 00:16:32.638 "seek_hole": false, 00:16:32.638 "seek_data": false, 00:16:32.638 "copy": false, 00:16:32.638 "nvme_iov_md": false 00:16:32.638 }, 00:16:32.638 "driver_specific": { 00:16:32.638 "ftl": { 00:16:32.638 "base_bdev": "cdc3496c-2753-477f-8943-dcf993775721", 
00:16:32.638 "cache": "nvc0n1p0" 00:16:32.638 } 00:16:32.638 } 00:16:32.638 } 00:16:32.638 ]' 00:16:32.638 04:36:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:32.638 04:36:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:32.638 04:36:55 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:32.898 [2024-11-03 04:36:55.753621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.753660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:32.898 [2024-11-03 04:36:55.753672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:32.898 [2024-11-03 04:36:55.753681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.753720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:32.898 [2024-11-03 04:36:55.756315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.756344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.898 [2024-11-03 04:36:55.756359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:16:32.898 [2024-11-03 04:36:55.756367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.756953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.756973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.898 [2024-11-03 04:36:55.756985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:16:32.898 [2024-11-03 04:36:55.756992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.760641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.760662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.898 [2024-11-03 04:36:55.760673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:16:32.898 [2024-11-03 04:36:55.760684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.767631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.767659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:32.898 [2024-11-03 04:36:55.767671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:16:32.898 [2024-11-03 04:36:55.767680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.790712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.790742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.898 [2024-11-03 04:36:55.790756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.936 ms 00:16:32.898 [2024-11-03 04:36:55.790764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.805884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.806021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.898 [2024-11-03 04:36:55.806041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.051 ms 00:16:32.898 [2024-11-03 04:36:55.806050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.806276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.806290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.898 [2024-11-03 04:36:55.806300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:32.898 [2024-11-03 04:36:55.806307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.829147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.829254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:32.898 [2024-11-03 04:36:55.829273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.806 ms 00:16:32.898 [2024-11-03 04:36:55.829280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.852069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.852165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:32.898 [2024-11-03 04:36:55.852223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.730 ms 00:16:32.898 [2024-11-03 04:36:55.852245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.874425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.874525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.898 [2024-11-03 04:36:55.874595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.114 ms 00:16:32.898 [2024-11-03 04:36:55.874618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.896928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.898 [2024-11-03 04:36:55.897024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.898 [2024-11-03 04:36:55.897078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.099 ms 00:16:32.898 [2024-11-03 04:36:55.897100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.898 [2024-11-03 04:36:55.897170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.898 [2024-11-03 04:36:55.897224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897533] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.897999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 
[2024-11-03 04:36:55.898726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.898 [2024-11-03 04:36:55.898841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.898871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.898903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.898963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.898996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:32.899 [2024-11-03 04:36:55.899769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.899981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.900975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.899 [2024-11-03 04:36:55.901397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.899 [2024-11-03 04:36:55.901408] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:16:32.899 [2024-11-03 04:36:55.901415] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.899 [2024-11-03 04:36:55.901424] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.899 [2024-11-03 04:36:55.901431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.899 [2024-11-03 04:36:55.901441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.899 [2024-11-03 04:36:55.901447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.899 [2024-11-03 04:36:55.901456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:32.899 [2024-11-03 04:36:55.901465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.899 [2024-11-03 04:36:55.901473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.899 [2024-11-03 04:36:55.901479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.899 [2024-11-03 04:36:55.901488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.899 [2024-11-03 04:36:55.901495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.899 [2024-11-03 04:36:55.901505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.320 ms 00:16:32.899 [2024-11-03 04:36:55.901512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.899 [2024-11-03 04:36:55.913916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.899 [2024-11-03 04:36:55.914008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.899 [2024-11-03 04:36:55.914058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.252 ms 00:16:32.899 [2024-11-03 04:36:55.914080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.899 [2024-11-03 04:36:55.914496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.899 [2024-11-03 04:36:55.914592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.899 [2024-11-03 04:36:55.914608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:32.899 [2024-11-03 04:36:55.914616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.899 [2024-11-03 04:36:55.958675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.899 [2024-11-03 04:36:55.958777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.899 [2024-11-03 04:36:55.958830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.899 [2024-11-03 04:36:55.958854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.899 [2024-11-03 04:36:55.958956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.899 [2024-11-03 04:36:55.959024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.900 [2024-11-03 04:36:55.959100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.900 [2024-11-03 04:36:55.959124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.900 [2024-11-03 04:36:55.959202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.900 [2024-11-03 04:36:55.959281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.900 [2024-11-03 04:36:55.959315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.900 [2024-11-03 04:36:55.959335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.900 [2024-11-03 04:36:55.959384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.900 [2024-11-03 04:36:55.959405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.900 [2024-11-03 04:36:55.959426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.900 [2024-11-03 04:36:55.959495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.040108] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.040241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.158 [2024-11-03 04:36:56.040291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.040314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.103408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.103516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.158 [2024-11-03 04:36:56.103579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.103624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.103722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.103774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.158 [2024-11-03 04:36:56.103814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.103863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.103938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.103961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.158 [2024-11-03 04:36:56.104007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.104078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.104215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.104334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.158 [2024-11-03 04:36:56.104361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.104380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.104461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.104486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:33.158 [2024-11-03 04:36:56.104510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.104530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.104644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.104698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.158 [2024-11-03 04:36:56.104754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.104785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.158 [2024-11-03 04:36:56.104915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:33.158 [2024-11-03 04:36:56.104943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.158 [2024-11-03 04:36:56.104964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:33.158 [2024-11-03 04:36:56.104982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:33.158 [2024-11-03 04:36:56.105173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 351.537 ms, result 0 00:16:33.158 true 00:16:33.158 04:36:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73571 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73571 ']' 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73571 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73571 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73571' 00:16:33.158 killing process with pid 73571 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73571 00:16:33.158 04:36:56 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73571 00:16:39.754 04:37:02 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:40.696 65536+0 records in 00:16:40.696 65536+0 records out 00:16:40.696 268435456 bytes (268 MB, 256 MiB) copied, 1.11057 s, 242 MB/s 00:16:40.696 04:37:03 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.696 [2024-11-03 04:37:03.582872] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
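The shell trace above regenerates a 256 MiB random pattern with dd and then pushes it through the ftl0 bdev with spdk_dd, pointing --json at a saved bdev subsystem config (test/ftl/config/ftl.json; such a config can be captured with save_subsystem_config, as shown earlier in this log). A minimal standalone sketch of the same flow, with illustrative paths (the of= target and the ftl.json location are assumptions; --if/--ob/--json are the flags used above):
  # generate 65536 x 4 KiB = 256 MiB of random data, then write it to the FTL bdev
  dd if=/dev/urandom of=random_pattern bs=4K count=65536
  ./build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json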
00:16:40.696 [2024-11-03 04:37:03.583024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73752 ] 00:16:40.696 [2024-11-03 04:37:03.747549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.957 [2024-11-03 04:37:03.848522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.218 [2024-11-03 04:37:04.138026] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:41.218 [2024-11-03 04:37:04.138354] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:41.218 [2024-11-03 04:37:04.296271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.218 [2024-11-03 04:37:04.296440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:41.218 [2024-11-03 04:37:04.296460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:41.218 [2024-11-03 04:37:04.296470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.218 [2024-11-03 04:37:04.299139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.218 [2024-11-03 04:37:04.299174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.218 [2024-11-03 04:37:04.299185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:16:41.218 [2024-11-03 04:37:04.299192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.218 [2024-11-03 04:37:04.299263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:41.218 [2024-11-03 04:37:04.300091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:41.218 [2024-11-03 04:37:04.300273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.218 [2024-11-03 04:37:04.300330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.218 [2024-11-03 04:37:04.300354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:16:41.218 [2024-11-03 04:37:04.300373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.301591] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:41.481 [2024-11-03 04:37:04.314668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.314786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:41.481 [2024-11-03 04:37:04.314807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.078 ms 00:16:41.481 [2024-11-03 04:37:04.314815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.314895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.314906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:41.481 [2024-11-03 04:37:04.314915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:41.481 [2024-11-03 04:37:04.314922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.320057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:41.481 [2024-11-03 04:37:04.320087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.481 [2024-11-03 04:37:04.320096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:16:41.481 [2024-11-03 04:37:04.320104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.320188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.320197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.481 [2024-11-03 04:37:04.320205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:41.481 [2024-11-03 04:37:04.320214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.320239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.320250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:41.481 [2024-11-03 04:37:04.320258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:41.481 [2024-11-03 04:37:04.320266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.320285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:41.481 [2024-11-03 04:37:04.323571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.323598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.481 [2024-11-03 04:37:04.323609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:16:41.481 [2024-11-03 04:37:04.323616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.323650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.323659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:41.481 [2024-11-03 04:37:04.323668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:41.481 [2024-11-03 04:37:04.323676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.323693] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:41.481 [2024-11-03 04:37:04.323714] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:41.481 [2024-11-03 04:37:04.323748] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:41.481 [2024-11-03 04:37:04.323763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:41.481 [2024-11-03 04:37:04.323866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:41.481 [2024-11-03 04:37:04.323878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:41.481 [2024-11-03 04:37:04.323888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:41.481 [2024-11-03 04:37:04.323899] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:41.481 [2024-11-03 04:37:04.323912] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:41.481 [2024-11-03 04:37:04.323920] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:41.481 [2024-11-03 04:37:04.323928] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:41.481 [2024-11-03 04:37:04.323936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:41.481 [2024-11-03 04:37:04.323943] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:41.481 [2024-11-03 04:37:04.323950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.323958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:41.481 [2024-11-03 04:37:04.323966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:16:41.481 [2024-11-03 04:37:04.323973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.324060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.481 [2024-11-03 04:37:04.324069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:41.481 [2024-11-03 04:37:04.324078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:41.481 [2024-11-03 04:37:04.324086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.481 [2024-11-03 04:37:04.324182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:41.481 [2024-11-03 04:37:04.324193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:41.481 [2024-11-03 04:37:04.324201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:41.481 [2024-11-03 04:37:04.324225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:41.481 [2024-11-03 04:37:04.324248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.481 [2024-11-03 04:37:04.324262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:41.481 [2024-11-03 04:37:04.324268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:41.481 [2024-11-03 04:37:04.324275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.481 [2024-11-03 04:37:04.324289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:41.481 [2024-11-03 04:37:04.324296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:41.481 [2024-11-03 04:37:04.324302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:41.481 [2024-11-03 04:37:04.324316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324323] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:41.481 [2024-11-03 04:37:04.324337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:41.481 [2024-11-03 04:37:04.324357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:41.481 [2024-11-03 04:37:04.324376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:41.481 [2024-11-03 04:37:04.324397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.481 [2024-11-03 04:37:04.324410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:41.481 [2024-11-03 04:37:04.324417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:41.481 [2024-11-03 04:37:04.324423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.481 [2024-11-03 04:37:04.324430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:41.481 [2024-11-03 04:37:04.324437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:41.481 [2024-11-03 04:37:04.324444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.481 [2024-11-03 04:37:04.324450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:41.482 [2024-11-03 04:37:04.324457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:41.482 [2024-11-03 04:37:04.324463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.482 [2024-11-03 04:37:04.324470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:41.482 [2024-11-03 04:37:04.324477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:41.482 [2024-11-03 04:37:04.324484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.482 [2024-11-03 04:37:04.324490] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:41.482 [2024-11-03 04:37:04.324498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:41.482 [2024-11-03 04:37:04.324505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.482 [2024-11-03 04:37:04.324514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.482 [2024-11-03 04:37:04.324523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:41.482 [2024-11-03 04:37:04.324530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:41.482 [2024-11-03 04:37:04.324538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:41.482 
[2024-11-03 04:37:04.324545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:41.482 [2024-11-03 04:37:04.324551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:41.482 [2024-11-03 04:37:04.324574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:41.482 [2024-11-03 04:37:04.324583] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:41.482 [2024-11-03 04:37:04.324592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:41.482 [2024-11-03 04:37:04.324609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:41.482 [2024-11-03 04:37:04.324616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:41.482 [2024-11-03 04:37:04.324623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:41.482 [2024-11-03 04:37:04.324630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:41.482 [2024-11-03 04:37:04.324637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:41.482 [2024-11-03 04:37:04.324644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:41.482 [2024-11-03 04:37:04.324652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:41.482 [2024-11-03 04:37:04.324659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:41.482 [2024-11-03 04:37:04.324666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:41.482 [2024-11-03 04:37:04.324702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:41.482 [2024-11-03 04:37:04.324710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:41.482 [2024-11-03 04:37:04.324727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:41.482 [2024-11-03 04:37:04.324735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:41.482 [2024-11-03 04:37:04.324742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:41.482 [2024-11-03 04:37:04.324750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.324757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:41.482 [2024-11-03 04:37:04.324787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:16:41.482 [2024-11-03 04:37:04.324795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.351052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.351174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.482 [2024-11-03 04:37:04.351228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:16:41.482 [2024-11-03 04:37:04.351251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.351380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.351456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:41.482 [2024-11-03 04:37:04.351481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:41.482 [2024-11-03 04:37:04.351502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.388514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.388672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.482 [2024-11-03 04:37:04.388732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.929 ms 00:16:41.482 [2024-11-03 04:37:04.388761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.388880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.388961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.482 [2024-11-03 04:37:04.388986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:41.482 [2024-11-03 04:37:04.389006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.389373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.389423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.482 [2024-11-03 04:37:04.389491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:41.482 [2024-11-03 04:37:04.389514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.389674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.389739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.482 [2024-11-03 04:37:04.389762] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:41.482 [2024-11-03 04:37:04.389781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.403215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.403323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.482 [2024-11-03 04:37:04.403372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.379 ms 00:16:41.482 [2024-11-03 04:37:04.403393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.416366] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:41.482 [2024-11-03 04:37:04.416491] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:41.482 [2024-11-03 04:37:04.416548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.416582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:41.482 [2024-11-03 04:37:04.416603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.044 ms 00:16:41.482 [2024-11-03 04:37:04.416622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.441170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.441284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:41.482 [2024-11-03 04:37:04.441344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.472 ms 00:16:41.482 [2024-11-03 04:37:04.441367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.453435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.453540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:41.482 [2024-11-03 04:37:04.453602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:16:41.482 [2024-11-03 04:37:04.453624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.465961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.466092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:41.482 [2024-11-03 04:37:04.466147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.679 ms 00:16:41.482 [2024-11-03 04:37:04.466170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.466858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.466959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:41.482 [2024-11-03 04:37:04.467013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:16:41.482 [2024-11-03 04:37:04.467035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.523380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.523540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:41.482 [2024-11-03 04:37:04.523619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.308 ms 00:16:41.482 [2024-11-03 04:37:04.523642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.533964] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:41.482 [2024-11-03 04:37:04.548195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.548318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:41.482 [2024-11-03 04:37:04.548366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.457 ms 00:16:41.482 [2024-11-03 04:37:04.548389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.482 [2024-11-03 04:37:04.548474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.482 [2024-11-03 04:37:04.548502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:41.483 [2024-11-03 04:37:04.548523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:41.483 [2024-11-03 04:37:04.548542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.483 [2024-11-03 04:37:04.548620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.483 [2024-11-03 04:37:04.548647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:41.483 [2024-11-03 04:37:04.548668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:41.483 [2024-11-03 04:37:04.548734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.483 [2024-11-03 04:37:04.548790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.483 [2024-11-03 04:37:04.548813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:41.483 [2024-11-03 04:37:04.548837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:41.483 [2024-11-03 04:37:04.548856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.483 [2024-11-03 04:37:04.548900] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:41.483 [2024-11-03 04:37:04.548923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.483 [2024-11-03 04:37:04.548943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:41.483 [2024-11-03 04:37:04.548962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:41.483 [2024-11-03 04:37:04.549021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.744 [2024-11-03 04:37:04.573007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.744 [2024-11-03 04:37:04.573128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:41.744 [2024-11-03 04:37:04.573177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.946 ms 00:16:41.744 [2024-11-03 04:37:04.573200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.744 [2024-11-03 04:37:04.573625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.744 [2024-11-03 04:37:04.573686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:41.744 [2024-11-03 04:37:04.573772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:41.744 [2024-11-03 04:37:04.573797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:41.744 [2024-11-03 04:37:04.574708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.744 [2024-11-03 04:37:04.578065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.130 ms, result 0 00:16:41.744 [2024-11-03 04:37:04.579430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.744 [2024-11-03 04:37:04.592722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.689  [2024-11-03T04:37:06.717Z] Copying: 16/256 [MB] (16 MBps) [2024-11-03T04:37:07.660Z] Copying: 33/256 [MB] (17 MBps) [2024-11-03T04:37:08.603Z] Copying: 52/256 [MB] (18 MBps) [2024-11-03T04:37:09.976Z] Copying: 64/256 [MB] (12 MBps) [2024-11-03T04:37:10.908Z] Copying: 94/256 [MB] (29 MBps) [2024-11-03T04:37:11.852Z] Copying: 118/256 [MB] (24 MBps) [2024-11-03T04:37:12.796Z] Copying: 141/256 [MB] (23 MBps) [2024-11-03T04:37:13.733Z] Copying: 154/256 [MB] (12 MBps) [2024-11-03T04:37:14.673Z] Copying: 170/256 [MB] (15 MBps) [2024-11-03T04:37:15.615Z] Copying: 183/256 [MB] (13 MBps) [2024-11-03T04:37:16.990Z] Copying: 193/256 [MB] (10 MBps) [2024-11-03T04:37:17.943Z] Copying: 212/256 [MB] (19 MBps) [2024-11-03T04:37:18.940Z] Copying: 232/256 [MB] (19 MBps) [2024-11-03T04:37:19.515Z] Copying: 248/256 [MB] (16 MBps) [2024-11-03T04:37:19.515Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-03 04:37:19.315394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.431 [2024-11-03 04:37:19.325948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.431 [2024-11-03 04:37:19.326172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.431 [2024-11-03 04:37:19.326197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.431 [2024-11-03 04:37:19.326206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.326240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:56.432 [2024-11-03 04:37:19.329331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.329534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.432 [2024-11-03 04:37:19.329574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:16:56.432 [2024-11-03 04:37:19.329583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.332608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.332659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.432 [2024-11-03 04:37:19.332672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:16:56.432 [2024-11-03 04:37:19.332681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.340699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.340751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:56.432 [2024-11-03 04:37:19.340773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.999 ms 00:16:56.432 [2024-11-03 04:37:19.340789] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.347765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.347969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:56.432 [2024-11-03 04:37:19.347991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.926 ms 00:16:56.432 [2024-11-03 04:37:19.348001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.374578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.374790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.432 [2024-11-03 04:37:19.374814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.499 ms 00:16:56.432 [2024-11-03 04:37:19.374822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.392279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.392331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.432 [2024-11-03 04:37:19.392354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.319 ms 00:16:56.432 [2024-11-03 04:37:19.392362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.392526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.392538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.432 [2024-11-03 04:37:19.392549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:56.432 [2024-11-03 04:37:19.392588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.419252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.419300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:56.432 [2024-11-03 04:37:19.419312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:16:56.432 [2024-11-03 04:37:19.419319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.445470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.445521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:56.432 [2024-11-03 04:37:19.445533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:16:56.432 [2024-11-03 04:37:19.445541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.471668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.471717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.432 [2024-11-03 04:37:19.471730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.043 ms 00:16:56.432 [2024-11-03 04:37:19.471738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.497488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.432 [2024-11-03 04:37:19.497716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.432 [2024-11-03 04:37:19.497740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.546 ms 00:16:56.432 [2024-11-03 04:37:19.497748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.432 [2024-11-03 04:37:19.497860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.432 [2024-11-03 04:37:19.497878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.497993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 
[2024-11-03 04:37:19.498069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:16:56.432 [2024-11-03 04:37:19.498274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.432 [2024-11-03 04:37:19.498288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.433 [2024-11-03 04:37:19.498752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.433 [2024-11-03 04:37:19.498761] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:16:56.433 [2024-11-03 04:37:19.498770] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.433 [2024-11-03 04:37:19.498778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.433 [2024-11-03 04:37:19.498786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.433 [2024-11-03 04:37:19.498795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.433 [2024-11-03 04:37:19.498804] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.433 [2024-11-03 04:37:19.498812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.433 [2024-11-03 04:37:19.498819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.433 [2024-11-03 04:37:19.498826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.433 [2024-11-03 04:37:19.498832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.433 [2024-11-03 04:37:19.498840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.433 [2024-11-03 04:37:19.498848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.433 [2024-11-03 04:37:19.498858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:16:56.433 [2024-11-03 04:37:19.498870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.433 [2024-11-03 04:37:19.512519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.433 [2024-11-03 04:37:19.512587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.433 [2024-11-03 04:37:19.512599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.627 ms 00:16:56.433 [2024-11-03 04:37:19.512607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.433 [2024-11-03 04:37:19.513048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.433 [2024-11-03 04:37:19.513070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.433 [2024-11-03 04:37:19.513080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:16:56.433 [2024-11-03 04:37:19.513090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.552587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.552641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.694 [2024-11-03 04:37:19.552653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.552662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.552801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 
04:37:19.552815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.694 [2024-11-03 04:37:19.552826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.552835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.552889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.552900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.694 [2024-11-03 04:37:19.552909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.552917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.552935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.552944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.694 [2024-11-03 04:37:19.552953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.552966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.639703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.639934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.694 [2024-11-03 04:37:19.639956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.639965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.711265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.711533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.694 [2024-11-03 04:37:19.711584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.711595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.711684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.711695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.694 [2024-11-03 04:37:19.711704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.711714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.711747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.711758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.694 [2024-11-03 04:37:19.711767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.711775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.711902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.711916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.694 [2024-11-03 04:37:19.711926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.711935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.711972] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.711982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.694 [2024-11-03 04:37:19.711990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.711999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.712045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.712057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.694 [2024-11-03 04:37:19.712065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.712074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.712122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.694 [2024-11-03 04:37:19.712135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.694 [2024-11-03 04:37:19.712144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.694 [2024-11-03 04:37:19.712154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.694 [2024-11-03 04:37:19.712315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.383 ms, result 0 00:16:57.638 00:16:57.638 00:16:57.638 04:37:20 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:57.638 04:37:20 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73933 00:16:57.638 04:37:20 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73933 00:16:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73933 ']' 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:57.638 04:37:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:57.638 [2024-11-03 04:37:20.622669] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:16:57.638 [2024-11-03 04:37:20.622817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73933 ] 00:16:57.900 [2024-11-03 04:37:20.784218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.900 [2024-11-03 04:37:20.904682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.841 04:37:21 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.841 04:37:21 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:58.841 04:37:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:58.841 [2024-11-03 04:37:21.786546] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.841 [2024-11-03 04:37:21.786643] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.104 [2024-11-03 04:37:21.962200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.962264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:59.104 [2024-11-03 04:37:21.962283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:59.104 [2024-11-03 04:37:21.962293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.965340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.965395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:59.104 [2024-11-03 04:37:21.965409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:16:59.104 [2024-11-03 04:37:21.965417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.965552] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:59.104 [2024-11-03 04:37:21.966676] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:59.104 [2024-11-03 04:37:21.966930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.966948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:59.104 [2024-11-03 04:37:21.966961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:16:59.104 [2024-11-03 04:37:21.966970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.968899] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:59.104 [2024-11-03 04:37:21.983519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.983604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:59.104 [2024-11-03 04:37:21.983619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.630 ms 00:16:59.104 [2024-11-03 04:37:21.983631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.983759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.983773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:59.104 [2024-11-03 04:37:21.983784] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:59.104 [2024-11-03 04:37:21.983795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.992149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.992204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:59.104 [2024-11-03 04:37:21.992215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.301 ms 00:16:59.104 [2024-11-03 04:37:21.992225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.104 [2024-11-03 04:37:21.992352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.104 [2024-11-03 04:37:21.992366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:59.105 [2024-11-03 04:37:21.992376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:59.105 [2024-11-03 04:37:21.992387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.992414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.105 [2024-11-03 04:37:21.992429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:59.105 [2024-11-03 04:37:21.992437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:59.105 [2024-11-03 04:37:21.992446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.992472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:59.105 [2024-11-03 04:37:21.996627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.105 [2024-11-03 04:37:21.996668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:59.105 [2024-11-03 04:37:21.996681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.159 ms 00:16:59.105 [2024-11-03 04:37:21.996689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.996793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.105 [2024-11-03 04:37:21.996804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:59.105 [2024-11-03 04:37:21.996816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:59.105 [2024-11-03 04:37:21.996824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.996847] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:59.105 [2024-11-03 04:37:21.996872] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:59.105 [2024-11-03 04:37:21.996917] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:59.105 [2024-11-03 04:37:21.996935] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:59.105 [2024-11-03 04:37:21.997049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:59.105 [2024-11-03 04:37:21.997062] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:59.105 [2024-11-03 04:37:21.997079] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:59.105 [2024-11-03 04:37:21.997091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:59.105 [2024-11-03 04:37:21.997123] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:59.105 [2024-11-03 04:37:21.997131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:59.105 [2024-11-03 04:37:21.997146] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:59.105 [2024-11-03 04:37:21.997156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.105 [2024-11-03 04:37:21.997165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:59.105 [2024-11-03 04:37:21.997173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:16:59.105 [2024-11-03 04:37:21.997183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.997273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.105 [2024-11-03 04:37:21.997285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:59.105 [2024-11-03 04:37:21.997296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:59.105 [2024-11-03 04:37:21.997305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.105 [2024-11-03 04:37:21.997407] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:59.105 [2024-11-03 04:37:21.997421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:59.105 [2024-11-03 04:37:21.997430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:59.105 [2024-11-03 04:37:21.997457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:59.105 [2024-11-03 04:37:21.997488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.105 [2024-11-03 04:37:21.997505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:59.105 [2024-11-03 04:37:21.997514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:59.105 [2024-11-03 04:37:21.997522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:59.105 [2024-11-03 04:37:21.997532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:59.105 [2024-11-03 04:37:21.997540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:59.105 [2024-11-03 04:37:21.997549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 
[2024-11-03 04:37:21.997582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:59.105 [2024-11-03 04:37:21.997592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:59.105 [2024-11-03 04:37:21.997629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:59.105 [2024-11-03 04:37:21.997677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:59.105 [2024-11-03 04:37:21.997700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:59.105 [2024-11-03 04:37:21.997725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:59.105 [2024-11-03 04:37:21.997750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.105 [2024-11-03 04:37:21.997766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:59.105 [2024-11-03 04:37:21.997776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:59.105 [2024-11-03 04:37:21.997783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:59.105 [2024-11-03 04:37:21.997792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:59.105 [2024-11-03 04:37:21.997798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:59.105 [2024-11-03 04:37:21.997810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:59.105 [2024-11-03 04:37:21.997827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:59.105 [2024-11-03 04:37:21.997842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:59.105 [2024-11-03 04:37:21.997859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:59.105 [2024-11-03 04:37:21.997869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:59.105 [2024-11-03 04:37:21.997889] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:59.105 [2024-11-03 04:37:21.997898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:59.105 [2024-11-03 04:37:21.997907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:59.105 [2024-11-03 04:37:21.997914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:59.105 [2024-11-03 04:37:21.997923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:59.105 [2024-11-03 04:37:21.997931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:59.105 [2024-11-03 04:37:21.997942] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:59.105 [2024-11-03 04:37:21.997951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.105 [2024-11-03 04:37:21.997965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:59.105 [2024-11-03 04:37:21.997973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:59.106 [2024-11-03 04:37:21.997982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:59.106 [2024-11-03 04:37:21.997990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:59.106 [2024-11-03 04:37:21.997999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:59.106 [2024-11-03 04:37:21.998007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:59.106 [2024-11-03 04:37:21.998016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:59.106 [2024-11-03 04:37:21.998023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:59.106 [2024-11-03 04:37:21.998033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:59.106 [2024-11-03 04:37:21.998041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:59.106 [2024-11-03 04:37:21.998084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:59.106 [2024-11-03 
04:37:21.998093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:59.106 [2024-11-03 04:37:21.998113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:59.106 [2024-11-03 04:37:21.998122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:59.106 [2024-11-03 04:37:21.998129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:59.106 [2024-11-03 04:37:21.998139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:21.998146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:59.106 [2024-11-03 04:37:21.998156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:16:59.106 [2024-11-03 04:37:21.998164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.030036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.030090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.106 [2024-11-03 04:37:22.030106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.809 ms 00:16:59.106 [2024-11-03 04:37:22.030114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.030252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.030266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.106 [2024-11-03 04:37:22.030277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:59.106 [2024-11-03 04:37:22.030285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.065771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.065831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.106 [2024-11-03 04:37:22.065848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.457 ms 00:16:59.106 [2024-11-03 04:37:22.065858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.065951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.065961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.106 [2024-11-03 04:37:22.065973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:59.106 [2024-11-03 04:37:22.065981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.066522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.066579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.106 [2024-11-03 04:37:22.066594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:16:59.106 [2024-11-03 04:37:22.066606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.066768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.066779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.106 [2024-11-03 04:37:22.066791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:59.106 [2024-11-03 04:37:22.066798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.085260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.085495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.106 [2024-11-03 04:37:22.085521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.435 ms 00:16:59.106 [2024-11-03 04:37:22.085530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.100283] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:59.106 [2024-11-03 04:37:22.100503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:59.106 [2024-11-03 04:37:22.100530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.100539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:59.106 [2024-11-03 04:37:22.100553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.840 ms 00:16:59.106 [2024-11-03 04:37:22.100583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.127510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.127738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:59.106 [2024-11-03 04:37:22.127768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.828 ms 00:16:59.106 [2024-11-03 04:37:22.127777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.141407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.141459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:59.106 [2024-11-03 04:37:22.141478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.377 ms 00:16:59.106 [2024-11-03 04:37:22.141487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.154418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.154465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:59.106 [2024-11-03 04:37:22.154480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.809 ms 00:16:59.106 [2024-11-03 04:37:22.154488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.106 [2024-11-03 04:37:22.155200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.106 [2024-11-03 04:37:22.155239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.106 [2024-11-03 04:37:22.155253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:59.106 [2024-11-03 04:37:22.155262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 
04:37:22.233259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.233330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.368 [2024-11-03 04:37:22.233351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.965 ms 00:16:59.368 [2024-11-03 04:37:22.233362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.245036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.368 [2024-11-03 04:37:22.264839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.264902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.368 [2024-11-03 04:37:22.264915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.352 ms 00:16:59.368 [2024-11-03 04:37:22.264926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.265024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.265039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.368 [2024-11-03 04:37:22.265048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:59.368 [2024-11-03 04:37:22.265059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.265117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.265131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.368 [2024-11-03 04:37:22.265141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:59.368 [2024-11-03 04:37:22.265152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.265180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.265194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.368 [2024-11-03 04:37:22.265204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.368 [2024-11-03 04:37:22.265214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.265250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.368 [2024-11-03 04:37:22.265266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.265276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.368 [2024-11-03 04:37:22.265286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:59.368 [2024-11-03 04:37:22.265297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.291538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.291599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.368 [2024-11-03 04:37:22.291618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.210 ms 00:16:59.368 [2024-11-03 04:37:22.291627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.291755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.368 [2024-11-03 04:37:22.291769] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.368 [2024-11-03 04:37:22.291781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:59.368 [2024-11-03 04:37:22.291790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.368 [2024-11-03 04:37:22.292903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.368 [2024-11-03 04:37:22.296385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.332 ms, result 0 00:16:59.368 [2024-11-03 04:37:22.299173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.369 Some configs were skipped because the RPC state that can call them passed over. 00:16:59.369 04:37:22 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:59.630 [2024-11-03 04:37:22.545115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.630 [2024-11-03 04:37:22.545332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:59.630 [2024-11-03 04:37:22.545405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:16:59.630 [2024-11-03 04:37:22.545434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.630 [2024-11-03 04:37:22.545495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.415 ms, result 0 00:16:59.630 true 00:16:59.630 04:37:22 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:59.892 [2024-11-03 04:37:22.776512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.892 [2024-11-03 04:37:22.776720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:59.892 [2024-11-03 04:37:22.776816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:16:59.892 [2024-11-03 04:37:22.776842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.892 [2024-11-03 04:37:22.776906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.530 ms, result 0 00:16:59.892 true 00:16:59.892 04:37:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73933 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73933 ']' 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73933 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73933 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:59.892 killing process with pid 73933 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73933' 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73933 00:16:59.892 04:37:22 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73933 00:17:00.461 [2024-11-03 04:37:23.512879] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.513079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.461 [2024-11-03 04:37:23.513133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:00.461 [2024-11-03 04:37:23.513154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.513186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:00.461 [2024-11-03 04:37:23.515270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.515365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.461 [2024-11-03 04:37:23.515421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:17:00.461 [2024-11-03 04:37:23.515438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.515676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.515938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.461 [2024-11-03 04:37:23.515968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:00.461 [2024-11-03 04:37:23.515983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.519232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.519325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:00.461 [2024-11-03 04:37:23.519373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:17:00.461 [2024-11-03 04:37:23.519392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.524668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.524771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.461 [2024-11-03 04:37:23.524916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.238 ms 00:17:00.461 [2024-11-03 04:37:23.524942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.532353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.532446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.461 [2024-11-03 04:37:23.532495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:17:00.461 [2024-11-03 04:37:23.532517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.539477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.539573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.461 [2024-11-03 04:37:23.539615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.922 ms 00:17:00.461 [2024-11-03 04:37:23.539633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.461 [2024-11-03 04:37:23.539742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.461 [2024-11-03 04:37:23.539763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.461 [2024-11-03 04:37:23.539779] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:00.461 [2024-11-03 04:37:23.539824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.722 [2024-11-03 04:37:23.548193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.722 [2024-11-03 04:37:23.548278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:00.722 [2024-11-03 04:37:23.548319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.341 ms 00:17:00.722 [2024-11-03 04:37:23.548335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.722 [2024-11-03 04:37:23.556353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.722 [2024-11-03 04:37:23.556437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:00.722 [2024-11-03 04:37:23.556480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.982 ms 00:17:00.722 [2024-11-03 04:37:23.556497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.722 [2024-11-03 04:37:23.564267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.722 [2024-11-03 04:37:23.564352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.722 [2024-11-03 04:37:23.564393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:17:00.722 [2024-11-03 04:37:23.564409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.722 [2024-11-03 04:37:23.572118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.722 [2024-11-03 04:37:23.572200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.722 [2024-11-03 04:37:23.572241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.645 ms 00:17:00.722 [2024-11-03 04:37:23.572257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.722 [2024-11-03 04:37:23.572290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.722 [2024-11-03 04:37:23.572311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572629] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.722 [2024-11-03 04:37:23.572810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.572902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.572924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.572972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.572998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 
[2024-11-03 04:37:23.573146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:00.723 [2024-11-03 04:37:23.573308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.723 [2024-11-03 04:37:23.573550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.724 [2024-11-03 04:37:23.573571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.724 [2024-11-03 04:37:23.573584] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.724 [2024-11-03 04:37:23.573594] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:17:00.724 [2024-11-03 04:37:23.573604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.724 [2024-11-03 04:37:23.573613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:00.724 [2024-11-03 04:37:23.573626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.724 [2024-11-03 04:37:23.573633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.724 [2024-11-03 04:37:23.573638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.724 [2024-11-03 04:37:23.573645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.724 [2024-11-03 04:37:23.573651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.724 [2024-11-03 04:37:23.573658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.724 [2024-11-03 04:37:23.573662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.724 [2024-11-03 04:37:23.573669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:00.724 [2024-11-03 04:37:23.573675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.724 [2024-11-03 04:37:23.573684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:17:00.724 [2024-11-03 04:37:23.573689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.583476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.724 [2024-11-03 04:37:23.583571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.724 [2024-11-03 04:37:23.583586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.767 ms 00:17:00.724 [2024-11-03 04:37:23.583593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.583879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.724 [2024-11-03 04:37:23.583888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.724 [2024-11-03 04:37:23.583896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:00.724 [2024-11-03 04:37:23.583902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.619086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.619180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.724 [2024-11-03 04:37:23.619194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.619201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.619275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.619282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.724 [2024-11-03 04:37:23.619290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.619296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.619331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.619339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.724 [2024-11-03 04:37:23.619348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.619354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.619367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.619373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.724 [2024-11-03 04:37:23.619380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.619386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.679265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.679298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.724 [2024-11-03 04:37:23.679308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.679314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 
04:37:23.728188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.724 [2024-11-03 04:37:23.728232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.724 [2024-11-03 04:37:23.728313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.724 [2024-11-03 04:37:23.728357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.724 [2024-11-03 04:37:23.728451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.724 [2024-11-03 04:37:23.728499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.724 [2024-11-03 04:37:23.728552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.724 [2024-11-03 04:37:23.728627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.724 [2024-11-03 04:37:23.728634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.724 [2024-11-03 04:37:23.728640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.724 [2024-11-03 04:37:23.728747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.849 ms, result 0 00:17:01.294 04:37:24 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:01.294 04:37:24 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.294 [2024-11-03 04:37:24.299706] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:17:01.294 [2024-11-03 04:37:24.299997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73990 ] 00:17:01.553 [2024-11-03 04:37:24.458649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.553 [2024-11-03 04:37:24.533740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.812 [2024-11-03 04:37:24.739992] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.812 [2024-11-03 04:37:24.740043] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.812 [2024-11-03 04:37:24.894459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.812 [2024-11-03 04:37:24.894499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.812 [2024-11-03 04:37:24.894510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.813 [2024-11-03 04:37:24.894516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.896645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.896796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.073 [2024-11-03 04:37:24.896809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:17:02.073 [2024-11-03 04:37:24.896815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.896901] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:02.073 [2024-11-03 04:37:24.897449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:02.073 [2024-11-03 04:37:24.897468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.897475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.073 [2024-11-03 04:37:24.897482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:17:02.073 [2024-11-03 04:37:24.897488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.898718] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:02.073 [2024-11-03 04:37:24.908514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.908658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:02.073 [2024-11-03 04:37:24.908677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.798 ms 00:17:02.073 [2024-11-03 04:37:24.908684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.908750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.908769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:02.073 [2024-11-03 04:37:24.908776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:17:02.073 [2024-11-03 04:37:24.908782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.913334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.913363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.073 [2024-11-03 04:37:24.913370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:17:02.073 [2024-11-03 04:37:24.913376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.913450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.913457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.073 [2024-11-03 04:37:24.913463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:02.073 [2024-11-03 04:37:24.913469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.913486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.913494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:02.073 [2024-11-03 04:37:24.913502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.073 [2024-11-03 04:37:24.913508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.913528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:02.073 [2024-11-03 04:37:24.916071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.916187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.073 [2024-11-03 04:37:24.916200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:17:02.073 [2024-11-03 04:37:24.916206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.916235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.073 [2024-11-03 04:37:24.916242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:02.073 [2024-11-03 04:37:24.916248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:02.073 [2024-11-03 04:37:24.916254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.073 [2024-11-03 04:37:24.916266] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:02.073 [2024-11-03 04:37:24.916282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:02.074 [2024-11-03 04:37:24.916311] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:02.074 [2024-11-03 04:37:24.916323] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:02.074 [2024-11-03 04:37:24.916402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:02.074 [2024-11-03 04:37:24.916411] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:02.074 [2024-11-03 04:37:24.916418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:02.074 [2024-11-03 04:37:24.916427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916435] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916444] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:02.074 [2024-11-03 04:37:24.916450] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:02.074 [2024-11-03 04:37:24.916459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:02.074 [2024-11-03 04:37:24.916465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:02.074 [2024-11-03 04:37:24.916472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-11-03 04:37:24.916477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:02.074 [2024-11-03 04:37:24.916484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:02.074 [2024-11-03 04:37:24.916490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-11-03 04:37:24.916574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.074 [2024-11-03 04:37:24.916581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:02.074 [2024-11-03 04:37:24.916588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:02.074 [2024-11-03 04:37:24.916596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.074 [2024-11-03 04:37:24.916671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:02.074 [2024-11-03 04:37:24.916679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:02.074 [2024-11-03 04:37:24.916686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:02.074 [2024-11-03 04:37:24.916704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:02.074 [2024-11-03 04:37:24.916722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.074 [2024-11-03 04:37:24.916733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:02.074 [2024-11-03 04:37:24.916738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:02.074 [2024-11-03 04:37:24.916743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:02.074 [2024-11-03 04:37:24.916755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:02.074 [2024-11-03 04:37:24.916769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:02.074 [2024-11-03 04:37:24.916775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916781] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:02.074 [2024-11-03 04:37:24.916786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:02.074 [2024-11-03 04:37:24.916802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:02.074 [2024-11-03 04:37:24.916817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:02.074 [2024-11-03 04:37:24.916833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:02.074 [2024-11-03 04:37:24.916848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:02.074 [2024-11-03 04:37:24.916863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.074 [2024-11-03 04:37:24.916873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:02.074 [2024-11-03 04:37:24.916878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:02.074 [2024-11-03 04:37:24.916883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:02.074 [2024-11-03 04:37:24.916888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:02.074 [2024-11-03 04:37:24.916893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:02.074 [2024-11-03 04:37:24.916898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:02.074 [2024-11-03 04:37:24.916909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:02.074 [2024-11-03 04:37:24.916913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916919] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:02.074 [2024-11-03 04:37:24.916925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:02.074 [2024-11-03 04:37:24.916931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:02.074 [2024-11-03 04:37:24.916946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:02.074 
[2024-11-03 04:37:24.916951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:02.074 [2024-11-03 04:37:24.916956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:02.074 [2024-11-03 04:37:24.916961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:02.074 [2024-11-03 04:37:24.916966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:02.074 [2024-11-03 04:37:24.916972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:02.074 [2024-11-03 04:37:24.916979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:02.074 [2024-11-03 04:37:24.916985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.074 [2024-11-03 04:37:24.916992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:02.074 [2024-11-03 04:37:24.916997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:02.074 [2024-11-03 04:37:24.917002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:02.074 [2024-11-03 04:37:24.917008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:02.074 [2024-11-03 04:37:24.917014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:02.074 [2024-11-03 04:37:24.917019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:02.074 [2024-11-03 04:37:24.917024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:02.074 [2024-11-03 04:37:24.917029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:02.074 [2024-11-03 04:37:24.917035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:02.074 [2024-11-03 04:37:24.917040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:02.074 [2024-11-03 04:37:24.917046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:02.074 [2024-11-03 04:37:24.917052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:02.074 [2024-11-03 04:37:24.917057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:02.074 [2024-11-03 04:37:24.917062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:02.074 [2024-11-03 04:37:24.917067] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:02.075 [2024-11-03 04:37:24.917073] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:02.075 [2024-11-03 04:37:24.917080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:02.075 [2024-11-03 04:37:24.917086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:02.075 [2024-11-03 04:37:24.917092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:02.075 [2024-11-03 04:37:24.917097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:02.075 [2024-11-03 04:37:24.917103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.917108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:02.075 [2024-11-03 04:37:24.917115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:17:02.075 [2024-11-03 04:37:24.917123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.938010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.938037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.075 [2024-11-03 04:37:24.938044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.849 ms 00:17:02.075 [2024-11-03 04:37:24.938050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.938139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.938147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:02.075 [2024-11-03 04:37:24.938156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:02.075 [2024-11-03 04:37:24.938162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.974089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.974120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.075 [2024-11-03 04:37:24.974130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.911 ms 00:17:02.075 [2024-11-03 04:37:24.974137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.974197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.974206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.075 [2024-11-03 04:37:24.974213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:02.075 [2024-11-03 04:37:24.974219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.974501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.974514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.075 [2024-11-03 04:37:24.974522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:02.075 [2024-11-03 04:37:24.974528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 
04:37:24.974650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.974663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.075 [2024-11-03 04:37:24.974670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:02.075 [2024-11-03 04:37:24.974676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.985455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.985576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.075 [2024-11-03 04:37:24.985589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.762 ms 00:17:02.075 [2024-11-03 04:37:24.985596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:24.995562] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:02.075 [2024-11-03 04:37:24.995590] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:02.075 [2024-11-03 04:37:24.995600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:24.995607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:02.075 [2024-11-03 04:37:24.995614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.924 ms 00:17:02.075 [2024-11-03 04:37:24.995620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.014328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.014362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:02.075 [2024-11-03 04:37:25.014371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:17:02.075 [2024-11-03 04:37:25.014377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.023458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.023484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:02.075 [2024-11-03 04:37:25.023492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.025 ms 00:17:02.075 [2024-11-03 04:37:25.023497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.032414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.032439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:02.075 [2024-11-03 04:37:25.032447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.873 ms 00:17:02.075 [2024-11-03 04:37:25.032452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.032936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.032953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:02.075 [2024-11-03 04:37:25.032960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:02.075 [2024-11-03 04:37:25.032966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.077820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.077857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:02.075 [2024-11-03 04:37:25.077867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.836 ms 00:17:02.075 [2024-11-03 04:37:25.077874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.085551] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.075 [2024-11-03 04:37:25.097124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.097267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.075 [2024-11-03 04:37:25.097281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.187 ms 00:17:02.075 [2024-11-03 04:37:25.097288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.097360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.097370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.075 [2024-11-03 04:37:25.097377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:02.075 [2024-11-03 04:37:25.097383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.097417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.097425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.075 [2024-11-03 04:37:25.097431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:02.075 [2024-11-03 04:37:25.097437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.097459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.097466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.075 [2024-11-03 04:37:25.097474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:02.075 [2024-11-03 04:37:25.097480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.097503] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.075 [2024-11-03 04:37:25.097510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.097516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.075 [2024-11-03 04:37:25.097522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:02.075 [2024-11-03 04:37:25.097528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.116025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.116057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.075 [2024-11-03 04:37:25.116066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.482 ms 00:17:02.075 [2024-11-03 04:37:25.116073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.116146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.075 [2024-11-03 04:37:25.116155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:02.075 [2024-11-03 04:37:25.116161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:02.075 [2024-11-03 04:37:25.116168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.075 [2024-11-03 04:37:25.116808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.075 [2024-11-03 04:37:25.119098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.092 ms, result 0 00:17:02.075 [2024-11-03 04:37:25.119820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.076 [2024-11-03 04:37:25.134625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.448  [2024-11-03T04:37:27.484Z] Copying: 24/256 [MB] (24 MBps) [2024-11-03T04:37:28.430Z] Copying: 41/256 [MB] (16 MBps) [2024-11-03T04:37:29.368Z] Copying: 51/256 [MB] (10 MBps) [2024-11-03T04:37:30.310Z] Copying: 70/256 [MB] (19 MBps) [2024-11-03T04:37:31.256Z] Copying: 82/256 [MB] (11 MBps) [2024-11-03T04:37:32.269Z] Copying: 100/256 [MB] (17 MBps) [2024-11-03T04:37:33.211Z] Copying: 111/256 [MB] (10 MBps) [2024-11-03T04:37:34.197Z] Copying: 124/256 [MB] (12 MBps) [2024-11-03T04:37:35.573Z] Copying: 140/256 [MB] (16 MBps) [2024-11-03T04:37:36.144Z] Copying: 160/256 [MB] (20 MBps) [2024-11-03T04:37:37.532Z] Copying: 190/256 [MB] (29 MBps) [2024-11-03T04:37:38.473Z] Copying: 207/256 [MB] (17 MBps) [2024-11-03T04:37:39.415Z] Copying: 228/256 [MB] (21 MBps) [2024-11-03T04:37:39.990Z] Copying: 249/256 [MB] (21 MBps) [2024-11-03T04:37:39.990Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-03 04:37:39.736883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.906 [2024-11-03 04:37:39.747523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.747615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.906 [2024-11-03 04:37:39.747631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.906 [2024-11-03 04:37:39.747641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.747667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.906 [2024-11-03 04:37:39.750730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.750784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.906 [2024-11-03 04:37:39.750796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.047 ms 00:17:16.906 [2024-11-03 04:37:39.750806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.751074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.751086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.906 [2024-11-03 04:37:39.751095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:16.906 [2024-11-03 04:37:39.751103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.754990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.755103] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.906 [2024-11-03 04:37:39.755175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:17:16.906 [2024-11-03 04:37:39.755199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.762129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.762293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:16.906 [2024-11-03 04:37:39.762360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.887 ms 00:17:16.906 [2024-11-03 04:37:39.762383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.788520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.788736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.906 [2024-11-03 04:37:39.788829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.052 ms 00:17:16.906 [2024-11-03 04:37:39.788854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.805413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.805625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.906 [2024-11-03 04:37:39.805656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.424 ms 00:17:16.906 [2024-11-03 04:37:39.805665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.806183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.806233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.906 [2024-11-03 04:37:39.806247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:16.906 [2024-11-03 04:37:39.806256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.832979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.833029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:16.906 [2024-11-03 04:37:39.833043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.690 ms 00:17:16.906 [2024-11-03 04:37:39.833052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.858422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.858469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:16.906 [2024-11-03 04:37:39.858481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.304 ms 00:17:16.906 [2024-11-03 04:37:39.858488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.883401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.883448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.906 [2024-11-03 04:37:39.883461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms 00:17:16.906 [2024-11-03 04:37:39.883468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.908508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:16.906 [2024-11-03 04:37:39.908579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.906 [2024-11-03 04:37:39.908591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.956 ms 00:17:16.906 [2024-11-03 04:37:39.908598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.906 [2024-11-03 04:37:39.908685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.906 [2024-11-03 04:37:39.908706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908890] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.908992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 
04:37:39.909084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:17:16.906 [2024-11-03 04:37:39.909276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.906 [2024-11-03 04:37:39.909306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.907 [2024-11-03 04:37:39.909525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.907 [2024-11-03 04:37:39.909533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:17:16.907 [2024-11-03 04:37:39.909542] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.907 [2024-11-03 04:37:39.909549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.907 [2024-11-03 04:37:39.909571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.907 [2024-11-03 04:37:39.909580] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.907 [2024-11-03 04:37:39.909587] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.907 [2024-11-03 04:37:39.909596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.907 [2024-11-03 04:37:39.909603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.907 [2024-11-03 04:37:39.909611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.907 [2024-11-03 04:37:39.909618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.907 [2024-11-03 04:37:39.909626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.907 [2024-11-03 04:37:39.909634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.907 [2024-11-03 04:37:39.909644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:17:16.907 [2024-11-03 04:37:39.909655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.923176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.907 [2024-11-03 04:37:39.923363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.907 [2024-11-03 04:37:39.923382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.500 ms 00:17:16.907 [2024-11-03 04:37:39.923392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.923828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.907 [2024-11-03 04:37:39.923855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.907 [2024-11-03 04:37:39.923865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:16.907 [2024-11-03 04:37:39.923873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.963304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.907 [2024-11-03 04:37:39.963355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.907 [2024-11-03 04:37:39.963367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:16.907 [2024-11-03 04:37:39.963376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.963469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.907 [2024-11-03 04:37:39.963482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.907 [2024-11-03 04:37:39.963491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.907 [2024-11-03 04:37:39.963498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.963555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.907 [2024-11-03 04:37:39.963589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.907 [2024-11-03 04:37:39.963598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.907 [2024-11-03 04:37:39.963606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.907 [2024-11-03 04:37:39.963626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.907 [2024-11-03 04:37:39.963634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.907 [2024-11-03 04:37:39.963645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.907 [2024-11-03 04:37:39.963654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.168 [2024-11-03 04:37:40.048081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.168 [2024-11-03 04:37:40.048143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.168 [2024-11-03 04:37:40.048156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.168 [2024-11-03 04:37:40.048164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.117691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.117941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.169 [2024-11-03 04:37:40.117968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.117978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.169 [2024-11-03 04:37:40.118061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.169 [2024-11-03 04:37:40.118120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.169 [2024-11-03 
04:37:40.118267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:17.169 [2024-11-03 04:37:40.118327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.169 [2024-11-03 04:37:40.118421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.169 [2024-11-03 04:37:40.118492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.169 [2024-11-03 04:37:40.118501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.169 [2024-11-03 04:37:40.118510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-11-03 04:37:40.118710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.174 ms, result 0 00:17:18.113 00:17:18.113 00:17:18.113 04:37:40 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:18.113 04:37:40 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:18.375 04:37:41 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.636 [2024-11-03 04:37:41.534479] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:17:18.636 [2024-11-03 04:37:41.534883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74166 ] 00:17:18.636 [2024-11-03 04:37:41.699428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.898 [2024-11-03 04:37:41.823315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.160 [2024-11-03 04:37:42.117016] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.160 [2024-11-03 04:37:42.117106] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.424 [2024-11-03 04:37:42.279951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.280022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.424 [2024-11-03 04:37:42.280040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:19.424 [2024-11-03 04:37:42.280049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.283081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.283291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.424 [2024-11-03 04:37:42.283313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:17:19.424 [2024-11-03 04:37:42.283322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.283814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.424 [2024-11-03 04:37:42.284743] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.424 [2024-11-03 04:37:42.284809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.284819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.424 [2024-11-03 04:37:42.284830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:17:19.424 [2024-11-03 04:37:42.284838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.286640] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.424 [2024-11-03 04:37:42.301073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.301127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.424 [2024-11-03 04:37:42.301147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.434 ms 00:17:19.424 [2024-11-03 04:37:42.301156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.301282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.301296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.424 [2024-11-03 04:37:42.301306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:19.424 [2024-11-03 04:37:42.301314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.309851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:19.424 [2024-11-03 04:37:42.309903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.424 [2024-11-03 04:37:42.309914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.490 ms 00:17:19.424 [2024-11-03 04:37:42.309922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.310029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.310040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.424 [2024-11-03 04:37:42.310049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:19.424 [2024-11-03 04:37:42.310058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.424 [2024-11-03 04:37:42.310085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.424 [2024-11-03 04:37:42.310093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.425 [2024-11-03 04:37:42.310104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:19.425 [2024-11-03 04:37:42.310112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.425 [2024-11-03 04:37:42.310134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:19.425 [2024-11-03 04:37:42.314325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.425 [2024-11-03 04:37:42.314368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.425 [2024-11-03 04:37:42.314380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:17:19.425 [2024-11-03 04:37:42.314388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.425 [2024-11-03 04:37:42.314465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.425 [2024-11-03 04:37:42.314475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.425 [2024-11-03 04:37:42.314485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:19.425 [2024-11-03 04:37:42.314493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.425 [2024-11-03 04:37:42.314514] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.425 [2024-11-03 04:37:42.314538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:19.425 [2024-11-03 04:37:42.314599] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.425 [2024-11-03 04:37:42.314616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:19.425 [2024-11-03 04:37:42.314723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.425 [2024-11-03 04:37:42.314736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.425 [2024-11-03 04:37:42.314747] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:19.425 [2024-11-03 04:37:42.314757] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.425 [2024-11-03 04:37:42.314767] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.425 [2024-11-03 04:37:42.314779] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:19.425 [2024-11-03 04:37:42.314788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.425 [2024-11-03 04:37:42.314796] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.425 [2024-11-03 04:37:42.314803] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.425 [2024-11-03 04:37:42.314812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.425 [2024-11-03 04:37:42.314820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.425 [2024-11-03 04:37:42.314828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:19.425 [2024-11-03 04:37:42.314836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.425 [2024-11-03 04:37:42.314923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.425 [2024-11-03 04:37:42.314932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.425 [2024-11-03 04:37:42.314940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:19.425 [2024-11-03 04:37:42.314950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.425 [2024-11-03 04:37:42.315049] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.425 [2024-11-03 04:37:42.315060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.425 [2024-11-03 04:37:42.315069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.425 [2024-11-03 04:37:42.315093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.425 [2024-11-03 04:37:42.315115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.425 [2024-11-03 04:37:42.315128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.425 [2024-11-03 04:37:42.315136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:19.425 [2024-11-03 04:37:42.315142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.425 [2024-11-03 04:37:42.315157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.425 [2024-11-03 04:37:42.315165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:19.425 [2024-11-03 04:37:42.315174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.425 [2024-11-03 04:37:42.315188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315195] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.425 [2024-11-03 04:37:42.315208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.425 [2024-11-03 04:37:42.315228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.425 [2024-11-03 04:37:42.315250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.425 [2024-11-03 04:37:42.315270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.425 [2024-11-03 04:37:42.315292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.425 [2024-11-03 04:37:42.315305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.425 [2024-11-03 04:37:42.315313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:19.425 [2024-11-03 04:37:42.315320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.425 [2024-11-03 04:37:42.315327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.425 [2024-11-03 04:37:42.315334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:19.425 [2024-11-03 04:37:42.315341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.425 [2024-11-03 04:37:42.315356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:19.425 [2024-11-03 04:37:42.315363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315371] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.425 [2024-11-03 04:37:42.315379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.425 [2024-11-03 04:37:42.315386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.425 [2024-11-03 04:37:42.315407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.425 [2024-11-03 04:37:42.315415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.425 [2024-11-03 04:37:42.315422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.425 
[2024-11-03 04:37:42.315429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.425 [2024-11-03 04:37:42.315435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.425 [2024-11-03 04:37:42.315442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.425 [2024-11-03 04:37:42.315451] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.425 [2024-11-03 04:37:42.315460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.425 [2024-11-03 04:37:42.315468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:19.425 [2024-11-03 04:37:42.315476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:19.425 [2024-11-03 04:37:42.315483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:19.425 [2024-11-03 04:37:42.315490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:19.425 [2024-11-03 04:37:42.315497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:19.426 [2024-11-03 04:37:42.315504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:19.426 [2024-11-03 04:37:42.315511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:19.426 [2024-11-03 04:37:42.315518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:19.426 [2024-11-03 04:37:42.315525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:19.426 [2024-11-03 04:37:42.315532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:19.426 [2024-11-03 04:37:42.315581] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.426 [2024-11-03 04:37:42.315591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.426 [2024-11-03 04:37:42.315606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.426 [2024-11-03 04:37:42.315614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.426 [2024-11-03 04:37:42.315628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.426 [2024-11-03 04:37:42.315636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.315645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.426 [2024-11-03 04:37:42.315652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:19.426 [2024-11-03 04:37:42.315664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.348076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.348126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.426 [2024-11-03 04:37:42.348138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.356 ms 00:17:19.426 [2024-11-03 04:37:42.348147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.348285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.348297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.426 [2024-11-03 04:37:42.348311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:19.426 [2024-11-03 04:37:42.348320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.397507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.397586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.426 [2024-11-03 04:37:42.397600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.165 ms 00:17:19.426 [2024-11-03 04:37:42.397610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.397734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.397747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.426 [2024-11-03 04:37:42.397757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.426 [2024-11-03 04:37:42.397767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.398337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.398386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.426 [2024-11-03 04:37:42.398398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:19.426 [2024-11-03 04:37:42.398406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.398591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.398602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.426 [2024-11-03 04:37:42.398611] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:19.426 [2024-11-03 04:37:42.398618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.415060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.415107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.426 [2024-11-03 04:37:42.415118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.419 ms 00:17:19.426 [2024-11-03 04:37:42.415127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.429711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:19.426 [2024-11-03 04:37:42.429762] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.426 [2024-11-03 04:37:42.429777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.429786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.426 [2024-11-03 04:37:42.429796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.528 ms 00:17:19.426 [2024-11-03 04:37:42.429804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.455949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.456027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.426 [2024-11-03 04:37:42.456042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:17:19.426 [2024-11-03 04:37:42.456050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.469100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.469148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.426 [2024-11-03 04:37:42.469160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.945 ms 00:17:19.426 [2024-11-03 04:37:42.469168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.481931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.481979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.426 [2024-11-03 04:37:42.481991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.669 ms 00:17:19.426 [2024-11-03 04:37:42.481998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.426 [2024-11-03 04:37:42.482676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.426 [2024-11-03 04:37:42.482702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.426 [2024-11-03 04:37:42.482713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:17:19.426 [2024-11-03 04:37:42.482721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.548372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.548444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.689 [2024-11-03 04:37:42.548459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.622 ms 00:17:19.689 [2024-11-03 04:37:42.548469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.559840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.689 [2024-11-03 04:37:42.579335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.579584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.689 [2024-11-03 04:37:42.579607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.730 ms 00:17:19.689 [2024-11-03 04:37:42.579617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.579722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.579738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.689 [2024-11-03 04:37:42.579750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:19.689 [2024-11-03 04:37:42.579758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.579820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.579829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.689 [2024-11-03 04:37:42.579838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:19.689 [2024-11-03 04:37:42.579847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.579875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.579886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.689 [2024-11-03 04:37:42.579897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:19.689 [2024-11-03 04:37:42.579905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.579947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.689 [2024-11-03 04:37:42.579957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.579966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.689 [2024-11-03 04:37:42.579974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:19.689 [2024-11-03 04:37:42.579983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.606818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.607009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.689 [2024-11-03 04:37:42.607031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.811 ms 00:17:19.689 [2024-11-03 04:37:42.607041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.689 [2024-11-03 04:37:42.607175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.689 [2024-11-03 04:37:42.607188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.689 [2024-11-03 04:37:42.607198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:19.689 [2024-11-03 04:37:42.607207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:19.689 [2024-11-03 04:37:42.608443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.689 [2024-11-03 04:37:42.612023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.153 ms, result 0 00:17:19.689 [2024-11-03 04:37:42.613416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.689 [2024-11-03 04:37:42.627023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.951  [2024-11-03T04:37:43.035Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-03 04:37:42.882491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.951 [2024-11-03 04:37:42.891790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.891844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.951 [2024-11-03 04:37:42.891857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.951 [2024-11-03 04:37:42.891865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.891889] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.951 [2024-11-03 04:37:42.894962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.895014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.951 [2024-11-03 04:37:42.895027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:17:19.951 [2024-11-03 04:37:42.895035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.897866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.897916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.951 [2024-11-03 04:37:42.897927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:17:19.951 [2024-11-03 04:37:42.897935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.902298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.902335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.951 [2024-11-03 04:37:42.902354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:17:19.951 [2024-11-03 04:37:42.902362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.909333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.909570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.951 [2024-11-03 04:37:42.909593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.937 ms 00:17:19.951 [2024-11-03 04:37:42.909602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.936097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.936151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.951 [2024-11-03 04:37:42.936163] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.436 ms 00:17:19.951 [2024-11-03 04:37:42.936171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.953237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.953292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.951 [2024-11-03 04:37:42.953315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 00:17:19.951 [2024-11-03 04:37:42.953327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.953485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.953497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.951 [2024-11-03 04:37:42.953506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:19.951 [2024-11-03 04:37:42.953514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.951 [2024-11-03 04:37:42.980456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.951 [2024-11-03 04:37:42.980585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.951 [2024-11-03 04:37:42.980603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.915 ms 00:17:19.951 [2024-11-03 04:37:42.980611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.952 [2024-11-03 04:37:43.006613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.952 [2024-11-03 04:37:43.006664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:19.952 [2024-11-03 04:37:43.006676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.912 ms 00:17:19.952 [2024-11-03 04:37:43.006683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.952 [2024-11-03 04:37:43.032397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.952 [2024-11-03 04:37:43.032447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.952 [2024-11-03 04:37:43.032460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.619 ms 00:17:19.952 [2024-11-03 04:37:43.032467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.215 [2024-11-03 04:37:43.058179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.215 [2024-11-03 04:37:43.058231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.215 [2024-11-03 04:37:43.058242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.596 ms 00:17:20.215 [2024-11-03 04:37:43.058250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.215 [2024-11-03 04:37:43.058315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.215 [2024-11-03 04:37:43.058339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.215 [2024-11-03 04:37:43.058373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.215 [2024-11-03 04:37:43.058871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058953] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.058998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.216 [2024-11-03 04:37:43.059137] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.216 [2024-11-03 04:37:43.059146] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:17:20.216 [2024-11-03 04:37:43.059154] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.216 [2024-11-03 04:37:43.059161] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:20.216 [2024-11-03 04:37:43.059169] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.216 [2024-11-03 04:37:43.059177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.216 [2024-11-03 04:37:43.059185] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.216 [2024-11-03 04:37:43.059192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.216 [2024-11-03 04:37:43.059200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.216 [2024-11-03 04:37:43.059206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.216 [2024-11-03 04:37:43.059213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.216 [2024-11-03 04:37:43.059220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.216 [2024-11-03 04:37:43.059228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.216 [2024-11-03 04:37:43.059240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:17:20.216 [2024-11-03 04:37:43.059247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.072915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.216 [2024-11-03 04:37:43.072962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.216 [2024-11-03 04:37:43.072974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.646 ms 00:17:20.216 [2024-11-03 04:37:43.072982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.073384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.216 [2024-11-03 04:37:43.073410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.216 [2024-11-03 04:37:43.073420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:17:20.216 [2024-11-03 04:37:43.073428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.112638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.112690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.216 [2024-11-03 04:37:43.112702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.112709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.112826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.112844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.216 [2024-11-03 04:37:43.112852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.112860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.112914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.112924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.216 [2024-11-03 04:37:43.112932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.112940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.112958] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.112966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.216 [2024-11-03 04:37:43.112977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.112984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.197413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.197475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.216 [2024-11-03 04:37:43.197489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.197498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.268313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.268374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.216 [2024-11-03 04:37:43.268393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.268403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.268482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.268492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.216 [2024-11-03 04:37:43.268501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.268510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.268542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.268552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.216 [2024-11-03 04:37:43.268589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.268601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.268708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.268719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.216 [2024-11-03 04:37:43.268730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.268740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.216 [2024-11-03 04:37:43.268791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.216 [2024-11-03 04:37:43.268801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.216 [2024-11-03 04:37:43.268810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.216 [2024-11-03 04:37:43.268819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.217 [2024-11-03 04:37:43.268865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.217 [2024-11-03 04:37:43.268875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.217 [2024-11-03 04:37:43.268883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.217 [2024-11-03 04:37:43.268892] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:20.217 [2024-11-03 04:37:43.268942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.217 [2024-11-03 04:37:43.268952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.217 [2024-11-03 04:37:43.268961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.217 [2024-11-03 04:37:43.268972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.217 [2024-11-03 04:37:43.269131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.329 ms, result 0 00:17:21.163 00:17:21.163 00:17:21.163 04:37:44 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:21.163 04:37:44 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74197 00:17:21.163 04:37:44 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74197 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74197 ']' 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:21.163 04:37:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:21.163 [2024-11-03 04:37:44.114469] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:17:21.163 [2024-11-03 04:37:44.114643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74197 ] 00:17:21.425 [2024-11-03 04:37:44.278300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.425 [2024-11-03 04:37:44.400190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.369 04:37:45 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:22.369 04:37:45 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:22.369 04:37:45 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:22.369 [2024-11-03 04:37:45.330538] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.369 [2024-11-03 04:37:45.330631] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.632 [2024-11-03 04:37:45.510201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.510267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.632 [2024-11-03 04:37:45.510286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.632 [2024-11-03 04:37:45.510295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.513353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.513633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.632 [2024-11-03 04:37:45.513662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:17:22.632 [2024-11-03 04:37:45.513671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.513942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.632 [2024-11-03 04:37:45.514714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.632 [2024-11-03 04:37:45.514754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.514763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.632 [2024-11-03 04:37:45.514775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:17:22.632 [2024-11-03 04:37:45.514783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.516623] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.632 [2024-11-03 04:37:45.532032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.532254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.632 [2024-11-03 04:37:45.532278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.418 ms 00:17:22.632 [2024-11-03 04:37:45.532290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.532462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.532481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.632 [2024-11-03 04:37:45.532492] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:22.632 [2024-11-03 04:37:45.532503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.541178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.541233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.632 [2024-11-03 04:37:45.541244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.620 ms 00:17:22.632 [2024-11-03 04:37:45.541254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.541377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.541390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.632 [2024-11-03 04:37:45.541399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:22.632 [2024-11-03 04:37:45.541414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.541441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.541454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.632 [2024-11-03 04:37:45.541462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.632 [2024-11-03 04:37:45.541472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.632 [2024-11-03 04:37:45.541498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.632 [2024-11-03 04:37:45.545660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.632 [2024-11-03 04:37:45.545701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.632 [2024-11-03 04:37:45.545715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.167 ms 00:17:22.633 [2024-11-03 04:37:45.545723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.633 [2024-11-03 04:37:45.545807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.633 [2024-11-03 04:37:45.545817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.633 [2024-11-03 04:37:45.545830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:22.633 [2024-11-03 04:37:45.545837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.633 [2024-11-03 04:37:45.545861] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.633 [2024-11-03 04:37:45.545884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.633 [2024-11-03 04:37:45.545930] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.633 [2024-11-03 04:37:45.545946] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:22.633 [2024-11-03 04:37:45.546060] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.633 [2024-11-03 04:37:45.546072] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.633 [2024-11-03 04:37:45.546087] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.633 [2024-11-03 04:37:45.546098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.633 [2024-11-03 04:37:45.546131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.633 [2024-11-03 04:37:45.546139] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.633 [2024-11-03 04:37:45.546152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.633 [2024-11-03 04:37:45.546159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.633 [2024-11-03 04:37:45.546169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.633 [2024-11-03 04:37:45.546178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:22.633 [2024-11-03 04:37:45.546187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.633 [2024-11-03 04:37:45.546274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.633 [2024-11-03 04:37:45.546285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.633 [2024-11-03 04:37:45.546295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:22.633 [2024-11-03 04:37:45.546303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.633 [2024-11-03 04:37:45.546403] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.633 [2024-11-03 04:37:45.546415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.633 [2024-11-03 04:37:45.546424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.633 [2024-11-03 04:37:45.546451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.633 [2024-11-03 04:37:45.546478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.633 [2024-11-03 04:37:45.546494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.633 [2024-11-03 04:37:45.546502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.633 [2024-11-03 04:37:45.546509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.633 [2024-11-03 04:37:45.546518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.633 [2024-11-03 04:37:45.546525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.633 [2024-11-03 04:37:45.546537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 
[2024-11-03 04:37:45.546544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.633 [2024-11-03 04:37:45.546553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.633 [2024-11-03 04:37:45.546610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.633 [2024-11-03 04:37:45.546637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.633 [2024-11-03 04:37:45.546660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.633 [2024-11-03 04:37:45.546702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.633 [2024-11-03 04:37:45.546752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.633 [2024-11-03 04:37:45.546768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.633 [2024-11-03 04:37:45.546777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.633 [2024-11-03 04:37:45.546783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.633 [2024-11-03 04:37:45.546792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.633 [2024-11-03 04:37:45.546800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.633 [2024-11-03 04:37:45.546810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.633 [2024-11-03 04:37:45.546827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.633 [2024-11-03 04:37:45.546835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.633 [2024-11-03 04:37:45.546852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.633 [2024-11-03 04:37:45.546862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.633 [2024-11-03 04:37:45.546883] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:22.633 [2024-11-03 04:37:45.546891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.633 [2024-11-03 04:37:45.546900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.633 [2024-11-03 04:37:45.546907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.633 [2024-11-03 04:37:45.546915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.633 [2024-11-03 04:37:45.546922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.633 [2024-11-03 04:37:45.546933] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.633 [2024-11-03 04:37:45.546943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.633 [2024-11-03 04:37:45.546958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.633 [2024-11-03 04:37:45.546972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.633 [2024-11-03 04:37:45.546982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.633 [2024-11-03 04:37:45.546990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.633 [2024-11-03 04:37:45.546999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.633 [2024-11-03 04:37:45.547006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.633 [2024-11-03 04:37:45.547015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.633 [2024-11-03 04:37:45.547022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.633 [2024-11-03 04:37:45.547030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.634 [2024-11-03 04:37:45.547037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.634 [2024-11-03 04:37:45.547079] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.634 [2024-11-03 
04:37:45.547088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.634 [2024-11-03 04:37:45.547109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.634 [2024-11-03 04:37:45.547118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.634 [2024-11-03 04:37:45.547126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.634 [2024-11-03 04:37:45.547136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.547144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.634 [2024-11-03 04:37:45.547154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:17:22.634 [2024-11-03 04:37:45.547161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.580074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.580128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.634 [2024-11-03 04:37:45.580144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.848 ms 00:17:22.634 [2024-11-03 04:37:45.580152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.580294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.580307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.634 [2024-11-03 04:37:45.580318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:22.634 [2024-11-03 04:37:45.580327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.616107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.616154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.634 [2024-11-03 04:37:45.616169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.754 ms 00:17:22.634 [2024-11-03 04:37:45.616180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.616274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.616284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.634 [2024-11-03 04:37:45.616295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.634 [2024-11-03 04:37:45.616304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.616920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.616955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.634 [2024-11-03 04:37:45.616968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:22.634 [2024-11-03 04:37:45.616978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.617132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.617143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.634 [2024-11-03 04:37:45.617154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:22.634 [2024-11-03 04:37:45.617162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.635616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.635666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.634 [2024-11-03 04:37:45.635681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:17:22.634 [2024-11-03 04:37:45.635689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.650253] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:22.634 [2024-11-03 04:37:45.650463] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.634 [2024-11-03 04:37:45.650490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.650498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.634 [2024-11-03 04:37:45.650511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.677 ms 00:17:22.634 [2024-11-03 04:37:45.650519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.677063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.677275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.634 [2024-11-03 04:37:45.677306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.425 ms 00:17:22.634 [2024-11-03 04:37:45.677316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.690711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.690761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.634 [2024-11-03 04:37:45.690779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.188 ms 00:17:22.634 [2024-11-03 04:37:45.690786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.703777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.703962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.634 [2024-11-03 04:37:45.703991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.888 ms 00:17:22.634 [2024-11-03 04:37:45.703999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.634 [2024-11-03 04:37:45.704695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.634 [2024-11-03 04:37:45.704723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.634 [2024-11-03 04:37:45.704737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:17:22.634 [2024-11-03 04:37:45.704745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 
04:37:45.790775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.790854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.002 [2024-11-03 04:37:45.790875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.976 ms 00:17:23.002 [2024-11-03 04:37:45.790884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.802277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.002 [2024-11-03 04:37:45.823098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.823172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.002 [2024-11-03 04:37:45.823188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.095 ms 00:17:23.002 [2024-11-03 04:37:45.823200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.823322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.823336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.002 [2024-11-03 04:37:45.823346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:23.002 [2024-11-03 04:37:45.823357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.823419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.823431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.002 [2024-11-03 04:37:45.823440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:23.002 [2024-11-03 04:37:45.823452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.823482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.823493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.002 [2024-11-03 04:37:45.823501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:23.002 [2024-11-03 04:37:45.823514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.823552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.002 [2024-11-03 04:37:45.823603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.823611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.002 [2024-11-03 04:37:45.823622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:23.002 [2024-11-03 04:37:45.823633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.851099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.851323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.002 [2024-11-03 04:37:45.851354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.431 ms 00:17:23.002 [2024-11-03 04:37:45.851363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.851502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.002 [2024-11-03 04:37:45.851514] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.002 [2024-11-03 04:37:45.851526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:23.002 [2024-11-03 04:37:45.851534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.002 [2024-11-03 04:37:45.852698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.002 [2024-11-03 04:37:45.856320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.126 ms, result 0 00:17:23.002 [2024-11-03 04:37:45.858550] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.002 Some configs were skipped because the RPC state that can call them passed over. 00:17:23.002 04:37:45 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:23.265 [2024-11-03 04:37:46.099581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.265 [2024-11-03 04:37:46.099800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:23.265 [2024-11-03 04:37:46.099870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:17:23.265 [2024-11-03 04:37:46.099899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.265 [2024-11-03 04:37:46.099961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.432 ms, result 0 00:17:23.265 true 00:17:23.265 04:37:46 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:23.265 [2024-11-03 04:37:46.315673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.265 [2024-11-03 04:37:46.315873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:23.265 [2024-11-03 04:37:46.315901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:17:23.265 [2024-11-03 04:37:46.315910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.265 [2024-11-03 04:37:46.315961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.106 ms, result 0 00:17:23.265 true 00:17:23.265 04:37:46 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74197 00:17:23.265 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74197 ']' 00:17:23.265 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74197 00:17:23.265 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:23.265 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.265 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74197 00:17:23.526 killing process with pid 74197 00:17:23.527 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.527 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.527 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74197' 00:17:23.527 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74197 00:17:23.527 04:37:46 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74197 00:17:24.099 [2024-11-03 04:37:47.163109] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.099 [2024-11-03 04:37:47.163209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.099 [2024-11-03 04:37:47.163227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:24.099 [2024-11-03 04:37:47.163240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.099 [2024-11-03 04:37:47.163293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.099 [2024-11-03 04:37:47.166844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.099 [2024-11-03 04:37:47.166899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.099 [2024-11-03 04:37:47.166921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.522 ms 00:17:24.099 [2024-11-03 04:37:47.166930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.099 [2024-11-03 04:37:47.167241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.099 [2024-11-03 04:37:47.167255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.099 [2024-11-03 04:37:47.167267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:24.099 [2024-11-03 04:37:47.167276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.099 [2024-11-03 04:37:47.172053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.099 [2024-11-03 04:37:47.172102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.099 [2024-11-03 04:37:47.172116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.730 ms 00:17:24.099 [2024-11-03 04:37:47.172128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.099 [2024-11-03 04:37:47.179644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.100 [2024-11-03 04:37:47.179695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:24.100 [2024-11-03 04:37:47.179716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.460 ms 00:17:24.100 [2024-11-03 04:37:47.179726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.191300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.191352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.363 [2024-11-03 04:37:47.191371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.495 ms 00:17:24.363 [2024-11-03 04:37:47.191388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.202089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.202394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.363 [2024-11-03 04:37:47.202424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.632 ms 00:17:24.363 [2024-11-03 04:37:47.202436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.202755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.202788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.363 [2024-11-03 04:37:47.202802] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:17:24.363 [2024-11-03 04:37:47.202812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.214890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.214941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:24.363 [2024-11-03 04:37:47.214957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.049 ms 00:17:24.363 [2024-11-03 04:37:47.214966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.226373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.226421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:24.363 [2024-11-03 04:37:47.226439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.327 ms 00:17:24.363 [2024-11-03 04:37:47.226447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.237072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.237271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.363 [2024-11-03 04:37:47.237300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.543 ms 00:17:24.363 [2024-11-03 04:37:47.237307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.247980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.363 [2024-11-03 04:37:47.248186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.363 [2024-11-03 04:37:47.248213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.323 ms 00:17:24.363 [2024-11-03 04:37:47.248221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.363 [2024-11-03 04:37:47.248395] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.363 [2024-11-03 04:37:47.248433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 
04:37:47.248538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:17:24.363 [2024-11-03 04:37:47.248834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.248982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:24.363 [2024-11-03 04:37:47.249254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.364 [2024-11-03 04:37:47.249480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.364 [2024-11-03 04:37:47.249492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:17:24.364 [2024-11-03 04:37:47.249509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.364 [2024-11-03 04:37:47.249523] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.364 [2024-11-03 04:37:47.249535] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.364 [2024-11-03 04:37:47.249546] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.364 [2024-11-03 04:37:47.249554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.364 [2024-11-03 04:37:47.249580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.364 [2024-11-03 04:37:47.249589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.364 [2024-11-03 04:37:47.249601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.364 [2024-11-03 04:37:47.249609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.364 [2024-11-03 04:37:47.249618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
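At this point the first ftl0 instance is shutting down after the two bdev_ftl_unmap calls issued at trim.sh@99 and trim.sh@100: the band dump above shows every band still free (0 of 261120 valid blocks, wr_cnt 0), and the statistics report 960 total writes against 0 user writes, which is consistent with the WAF ratio being taken against user writes and therefore printed as inf. As a quick sanity check on the values shown in these dumps (the echo command below is not part of the test, just plain shell arithmetic on the logged numbers), the second unmap ends exactly at the last entry of the 23592960-entry L2P reported in the layout dumps, so the two calls trim the first and the last 1024 blocks of the logical space:

  $ echo $((23591936 + 1024))    # --lba + --num_blocks of the trim.sh@100 unmap
  23592960                       # matches the "L2P entries: 23592960" figure in the layout dump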
00:17:24.364 [2024-11-03 04:37:47.249626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.364 [2024-11-03 04:37:47.249637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:17:24.364 [2024-11-03 04:37:47.249645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.264727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.364 [2024-11-03 04:37:47.264788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.364 [2024-11-03 04:37:47.264807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.030 ms 00:17:24.364 [2024-11-03 04:37:47.264816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.265306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.364 [2024-11-03 04:37:47.265332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.364 [2024-11-03 04:37:47.265345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:24.364 [2024-11-03 04:37:47.265356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.318312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.364 [2024-11-03 04:37:47.318366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.364 [2024-11-03 04:37:47.318382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.364 [2024-11-03 04:37:47.318392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.318501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.364 [2024-11-03 04:37:47.318512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.364 [2024-11-03 04:37:47.318524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.364 [2024-11-03 04:37:47.318533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.318621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.364 [2024-11-03 04:37:47.318634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.364 [2024-11-03 04:37:47.318649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.364 [2024-11-03 04:37:47.318657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.318679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.364 [2024-11-03 04:37:47.318688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.364 [2024-11-03 04:37:47.318699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.364 [2024-11-03 04:37:47.318707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.364 [2024-11-03 04:37:47.410725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.364 [2024-11-03 04:37:47.410789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.364 [2024-11-03 04:37:47.410807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.364 [2024-11-03 04:37:47.410816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 
04:37:47.516713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.516815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.625 [2024-11-03 04:37:47.516840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.516854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.625 [2024-11-03 04:37:47.517077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.625 [2024-11-03 04:37:47.517178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.625 [2024-11-03 04:37:47.517417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.625 [2024-11-03 04:37:47.517534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.625 [2024-11-03 04:37:47.517722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.517830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.625 [2024-11-03 04:37:47.517848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.625 [2024-11-03 04:37:47.517864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.625 [2024-11-03 04:37:47.517876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.625 [2024-11-03 04:37:47.518148] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.972 ms, result 0 00:17:25.569 04:37:48 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.569 [2024-11-03 04:37:48.420276] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:17:25.569 [2024-11-03 04:37:48.420438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74255 ] 00:17:25.569 [2024-11-03 04:37:48.589078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.829 [2024-11-03 04:37:48.736171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.090 [2024-11-03 04:37:49.074428] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.090 [2024-11-03 04:37:49.074530] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.352 [2024-11-03 04:37:49.241516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.241612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.352 [2024-11-03 04:37:49.241632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.352 [2024-11-03 04:37:49.241641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.245065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.245124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.352 [2024-11-03 04:37:49.245137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.399 ms 00:17:26.352 [2024-11-03 04:37:49.245147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.245285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.352 [2024-11-03 04:37:49.246407] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.352 [2024-11-03 04:37:49.246480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.246492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.352 [2024-11-03 04:37:49.246504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:17:26.352 [2024-11-03 04:37:49.246513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.249161] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.352 [2024-11-03 04:37:49.264454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.264824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.352 [2024-11-03 04:37:49.264863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.296 ms 00:17:26.352 [2024-11-03 04:37:49.264872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.265003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.265017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.352 [2024-11-03 04:37:49.265027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.352 [2024-11-03 
04:37:49.265038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.276800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.276851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.352 [2024-11-03 04:37:49.276864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.713 ms 00:17:26.352 [2024-11-03 04:37:49.276873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.277010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.277022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.352 [2024-11-03 04:37:49.277033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:26.352 [2024-11-03 04:37:49.277042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.277070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.352 [2024-11-03 04:37:49.277081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.352 [2024-11-03 04:37:49.277095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:26.352 [2024-11-03 04:37:49.277104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.352 [2024-11-03 04:37:49.277129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.352 [2024-11-03 04:37:49.281974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.353 [2024-11-03 04:37:49.282019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.353 [2024-11-03 04:37:49.282030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:17:26.353 [2024-11-03 04:37:49.282039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.353 [2024-11-03 04:37:49.282110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.353 [2024-11-03 04:37:49.282121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.353 [2024-11-03 04:37:49.282131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:26.353 [2024-11-03 04:37:49.282140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.353 [2024-11-03 04:37:49.282160] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.353 [2024-11-03 04:37:49.282186] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.353 [2024-11-03 04:37:49.282232] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.353 [2024-11-03 04:37:49.282251] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.353 [2024-11-03 04:37:49.282363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.353 [2024-11-03 04:37:49.282376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.353 [2024-11-03 04:37:49.282389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
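A note on reading the superblock layout dumps that follow (and the identical ones from the first startup earlier): the blk_offs/blk_sz fields in the "SB metadata layout" lines are counts of FTL blocks, and the MiB figures in the human-readable region listing are those counts multiplied by a 4 KiB block size; the block size is inferred here from the dump's own numbers rather than stated anywhere in the log, and the shell commands below are only a sketch of that arithmetic, not part of the test:

  $ echo $((0x5a00 * 4096 / 1048576))      # blk_sz of the 0x5a00-block region, converted to MiB
  90                                       # matches "Region l2p ... blocks: 90.00 MiB"
  $ echo $((0x1900000 * 4096 / 1048576))   # blk_sz of the 0x1900000-block region, converted to MiB
  102400                                   # matches "Region data_btm ... blocks: 102400.00 MiB"

The 90 MiB l2p region also lines up with the reported "L2P entries: 23592960" at "L2P address size: 4": 23592960 entries x 4 bytes is again exactly 90 MiB.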
00:17:26.353 [2024-11-03 04:37:49.282401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282412] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282425] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.353 [2024-11-03 04:37:49.282434] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.353 [2024-11-03 04:37:49.282442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.353 [2024-11-03 04:37:49.282451] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.353 [2024-11-03 04:37:49.282460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.353 [2024-11-03 04:37:49.282470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.353 [2024-11-03 04:37:49.282481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:26.353 [2024-11-03 04:37:49.282491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.353 [2024-11-03 04:37:49.282607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.353 [2024-11-03 04:37:49.282617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.353 [2024-11-03 04:37:49.282628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:26.353 [2024-11-03 04:37:49.282638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.353 [2024-11-03 04:37:49.282741] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.353 [2024-11-03 04:37:49.282755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.353 [2024-11-03 04:37:49.282764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.353 [2024-11-03 04:37:49.282789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.353 [2024-11-03 04:37:49.282817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.353 [2024-11-03 04:37:49.282833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.353 [2024-11-03 04:37:49.282842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.353 [2024-11-03 04:37:49.282849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.353 [2024-11-03 04:37:49.282866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.353 [2024-11-03 04:37:49.282874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.353 [2024-11-03 04:37:49.282881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:26.353 [2024-11-03 04:37:49.282896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.353 [2024-11-03 04:37:49.282919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.353 [2024-11-03 04:37:49.282939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.353 [2024-11-03 04:37:49.282959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.353 [2024-11-03 04:37:49.282982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.353 [2024-11-03 04:37:49.282988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.353 [2024-11-03 04:37:49.282995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.353 [2024-11-03 04:37:49.283002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.353 [2024-11-03 04:37:49.283009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.353 [2024-11-03 04:37:49.283016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.353 [2024-11-03 04:37:49.283023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.353 [2024-11-03 04:37:49.283031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.353 [2024-11-03 04:37:49.283038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.353 [2024-11-03 04:37:49.283045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.353 [2024-11-03 04:37:49.283051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.283058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.353 [2024-11-03 04:37:49.283064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.353 [2024-11-03 04:37:49.283071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.283083] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.353 [2024-11-03 04:37:49.283093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.353 [2024-11-03 04:37:49.283101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.353 [2024-11-03 04:37:49.283109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.353 [2024-11-03 04:37:49.283120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.353 [2024-11-03 04:37:49.283128] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.353 [2024-11-03 04:37:49.283135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.353 [2024-11-03 04:37:49.283142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.353 [2024-11-03 04:37:49.283149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.353 [2024-11-03 04:37:49.283157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.353 [2024-11-03 04:37:49.283166] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.353 [2024-11-03 04:37:49.283176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.353 [2024-11-03 04:37:49.283185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.353 [2024-11-03 04:37:49.283192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.353 [2024-11-03 04:37:49.283199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.353 [2024-11-03 04:37:49.283206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.353 [2024-11-03 04:37:49.283214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.353 [2024-11-03 04:37:49.283222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.353 [2024-11-03 04:37:49.283229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.353 [2024-11-03 04:37:49.283236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.353 [2024-11-03 04:37:49.283244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.353 [2024-11-03 04:37:49.283250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.354 [2024-11-03 04:37:49.283289] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.354 [2024-11-03 04:37:49.283297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.354 [2024-11-03 04:37:49.283314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.354 [2024-11-03 04:37:49.283321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.354 [2024-11-03 04:37:49.283328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.354 [2024-11-03 04:37:49.283341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.283349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.354 [2024-11-03 04:37:49.283358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:17:26.354 [2024-11-03 04:37:49.283369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.322700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.322924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.354 [2024-11-03 04:37:49.323206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.273 ms 00:17:26.354 [2024-11-03 04:37:49.323268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.323447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.323994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.354 [2024-11-03 04:37:49.324164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:26.354 [2024-11-03 04:37:49.324199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.379546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.379782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.354 [2024-11-03 04:37:49.380065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.280 ms 00:17:26.354 [2024-11-03 04:37:49.380138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.380312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.380436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.354 [2024-11-03 04:37:49.380463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.354 [2024-11-03 04:37:49.380485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.381749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.381939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.354 [2024-11-03 04:37:49.382006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:17:26.354 [2024-11-03 04:37:49.382033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.382241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.382269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.354 [2024-11-03 04:37:49.382281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:26.354 [2024-11-03 04:37:49.382290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.401725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.401896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.354 [2024-11-03 04:37:49.401957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.410 ms 00:17:26.354 [2024-11-03 04:37:49.401983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.354 [2024-11-03 04:37:49.417880] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:26.354 [2024-11-03 04:37:49.418097] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.354 [2024-11-03 04:37:49.418164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.354 [2024-11-03 04:37:49.418187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.354 [2024-11-03 04:37:49.418210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:17:26.354 [2024-11-03 04:37:49.418229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.445433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 04:37:49.445647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.624 [2024-11-03 04:37:49.445713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.095 ms 00:17:26.624 [2024-11-03 04:37:49.445737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.459406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 04:37:49.459608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.624 [2024-11-03 04:37:49.459673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.448 ms 00:17:26.624 [2024-11-03 04:37:49.459697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.472883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 04:37:49.473069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.624 [2024-11-03 04:37:49.473132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:17:26.624 [2024-11-03 04:37:49.473154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.473992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 04:37:49.474159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.624 [2024-11-03 04:37:49.474226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:17:26.624 [2024-11-03 04:37:49.474250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.549988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 
04:37:49.550198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.624 [2024-11-03 04:37:49.550261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.665 ms 00:17:26.624 [2024-11-03 04:37:49.550285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.624 [2024-11-03 04:37:49.562500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.624 [2024-11-03 04:37:49.588125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.624 [2024-11-03 04:37:49.588316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.624 [2024-11-03 04:37:49.588376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.691 ms 00:17:26.625 [2024-11-03 04:37:49.588402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.588539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.588612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.625 [2024-11-03 04:37:49.588635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:26.625 [2024-11-03 04:37:49.588656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.588740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.588880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.625 [2024-11-03 04:37:49.588902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:26.625 [2024-11-03 04:37:49.588924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.588970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.588995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.625 [2024-11-03 04:37:49.589080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:26.625 [2024-11-03 04:37:49.589108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.589170] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.625 [2024-11-03 04:37:49.589197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.589218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.625 [2024-11-03 04:37:49.589240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:26.625 [2024-11-03 04:37:49.589333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.617185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.617245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.625 [2024-11-03 04:37:49.617261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.797 ms 00:17:26.625 [2024-11-03 04:37:49.617271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.617422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.625 [2024-11-03 04:37:49.617438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.625 [2024-11-03 
04:37:49.617450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:26.625 [2024-11-03 04:37:49.617459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.625 [2024-11-03 04:37:49.618820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.625 [2024-11-03 04:37:49.622623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.878 ms, result 0 00:17:26.625 [2024-11-03 04:37:49.623818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.625 [2024-11-03 04:37:49.638098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.016  [2024-11-03T04:37:52.043Z] Copying: 14/256 [MB] (14 MBps) [2024-11-03T04:37:52.988Z] Copying: 28/256 [MB] (13 MBps) [2024-11-03T04:37:53.935Z] Copying: 40/256 [MB] (11 MBps) [2024-11-03T04:37:54.881Z] Copying: 51/256 [MB] (11 MBps) [2024-11-03T04:37:55.823Z] Copying: 63/256 [MB] (11 MBps) [2024-11-03T04:37:56.767Z] Copying: 74/256 [MB] (11 MBps) [2024-11-03T04:37:57.708Z] Copying: 86/256 [MB] (11 MBps) [2024-11-03T04:37:59.095Z] Copying: 98/256 [MB] (11 MBps) [2024-11-03T04:38:00.039Z] Copying: 108/256 [MB] (10 MBps) [2024-11-03T04:38:00.983Z] Copying: 120/256 [MB] (11 MBps) [2024-11-03T04:38:01.926Z] Copying: 132/256 [MB] (11 MBps) [2024-11-03T04:38:02.870Z] Copying: 143/256 [MB] (11 MBps) [2024-11-03T04:38:03.816Z] Copying: 155/256 [MB] (11 MBps) [2024-11-03T04:38:04.760Z] Copying: 166/256 [MB] (11 MBps) [2024-11-03T04:38:05.705Z] Copying: 177/256 [MB] (10 MBps) [2024-11-03T04:38:07.092Z] Copying: 189/256 [MB] (11 MBps) [2024-11-03T04:38:08.035Z] Copying: 201/256 [MB] (11 MBps) [2024-11-03T04:38:08.979Z] Copying: 212/256 [MB] (11 MBps) [2024-11-03T04:38:09.923Z] Copying: 223/256 [MB] (10 MBps) [2024-11-03T04:38:10.865Z] Copying: 234/256 [MB] (11 MBps) [2024-11-03T04:38:11.808Z] Copying: 245/256 [MB] (11 MBps) [2024-11-03T04:38:11.808Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-03 04:38:11.788214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.724 [2024-11-03 04:38:11.796686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.724 [2024-11-03 04:38:11.796717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.724 [2024-11-03 04:38:11.796730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.724 [2024-11-03 04:38:11.796737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.724 [2024-11-03 04:38:11.796765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.724 [2024-11-03 04:38:11.799420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.724 [2024-11-03 04:38:11.799452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.724 [2024-11-03 04:38:11.799461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:17:48.724 [2024-11-03 04:38:11.799467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.724 [2024-11-03 04:38:11.799806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.724 [2024-11-03 04:38:11.799837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 
00:17:48.724 [2024-11-03 04:38:11.799855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:48.724 [2024-11-03 04:38:11.799870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.724 [2024-11-03 04:38:11.802718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.724 [2024-11-03 04:38:11.802735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.724 [2024-11-03 04:38:11.802746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:17:48.724 [2024-11-03 04:38:11.802753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.808861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.808883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.987 [2024-11-03 04:38:11.808891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:17:48.987 [2024-11-03 04:38:11.808897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.827693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.827799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.987 [2024-11-03 04:38:11.827812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.757 ms 00:17:48.987 [2024-11-03 04:38:11.827819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.839302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.839403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.987 [2024-11-03 04:38:11.839421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.455 ms 00:17:48.987 [2024-11-03 04:38:11.839427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.839527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.839534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.987 [2024-11-03 04:38:11.839541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.987 [2024-11-03 04:38:11.839547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.858158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.858254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.987 [2024-11-03 04:38:11.858267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.578 ms 00:17:48.987 [2024-11-03 04:38:11.858273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.876401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.876425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.987 [2024-11-03 04:38:11.876433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.102 ms 00:17:48.987 [2024-11-03 04:38:11.876439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.894066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.894090] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.987 [2024-11-03 04:38:11.894098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.600 ms 00:17:48.987 [2024-11-03 04:38:11.894104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.911737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.987 [2024-11-03 04:38:11.911829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.987 [2024-11-03 04:38:11.911841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.572 ms 00:17:48.987 [2024-11-03 04:38:11.911847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.987 [2024-11-03 04:38:11.911872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.987 [2024-11-03 04:38:11.911888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.911996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:17:48.987 [2024-11-03 04:38:11.912003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.987 [2024-11-03 04:38:11.912058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912432] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.988 [2024-11-03 04:38:11.912487] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.988 [2024-11-03 04:38:11.912494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7c2ce644-fdde-421c-ba2f-76d9ec869ae4 00:17:48.988 [2024-11-03 04:38:11.912500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.988 [2024-11-03 04:38:11.912506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.988 [2024-11-03 04:38:11.912511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.988 [2024-11-03 04:38:11.912517] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.988 [2024-11-03 04:38:11.912524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.988 [2024-11-03 04:38:11.912531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.988 [2024-11-03 04:38:11.912536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.988 [2024-11-03 04:38:11.912541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.988 [2024-11-03 04:38:11.912546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.988 [2024-11-03 04:38:11.912552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.988 [2024-11-03 04:38:11.912570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.988 [2024-11-03 04:38:11.912577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:17:48.988 [2024-11-03 04:38:11.912585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.988 [2024-11-03 04:38:11.922834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.988 [2024-11-03 04:38:11.922920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.988 [2024-11-03 04:38:11.922931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.236 ms 00:17:48.988 [2024-11-03 04:38:11.922938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.988 [2024-11-03 04:38:11.923245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.988 [2024-11-03 04:38:11.923256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.988 [2024-11-03 04:38:11.923263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:17:48.988 [2024-11-03 04:38:11.923269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:48.988 [2024-11-03 04:38:11.952454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.988 [2024-11-03 04:38:11.952479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.989 [2024-11-03 04:38:11.952487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:11.952495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:11.952552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:11.952575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.989 [2024-11-03 04:38:11.952583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:11.952589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:11.952628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:11.952636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.989 [2024-11-03 04:38:11.952643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:11.952649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:11.952663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:11.952670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.989 [2024-11-03 04:38:11.952678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:11.952684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.015064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.015096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.989 [2024-11-03 04:38:12.015106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.015113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.065772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.065939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.989 [2024-11-03 04:38:12.065956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.065963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.989 [2024-11-03 04:38:12.066028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.066034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.989 [2024-11-03 04:38:12.066073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 
04:38:12.066080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.989 [2024-11-03 04:38:12.066176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.066183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.989 [2024-11-03 04:38:12.066224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.066231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.989 [2024-11-03 04:38:12.066283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.066290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.989 [2024-11-03 04:38:12.066337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.989 [2024-11-03 04:38:12.066344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.989 [2024-11-03 04:38:12.066350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.989 [2024-11-03 04:38:12.066483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.773 ms, result 0 00:17:49.605 00:17:49.605 00:17:49.605 04:38:12 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.185 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:50.185 04:38:13 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74197 00:17:50.185 04:38:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74197 ']' 00:17:50.185 04:38:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74197 00:17:50.185 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74197) - No such process 00:17:50.185 Process with pid 74197 is not found 00:17:50.185 04:38:13 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74197 is not found' 00:17:50.185 ************************************ 00:17:50.185 END TEST ftl_trim 00:17:50.185 ************************************ 
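The '/home/vagrant/spdk_repo/spdk/test/ftl/data: OK' line a few entries above is the data-integrity check of the trim test: trim.sh@106 runs md5sum -c against a checksum manifest recorded earlier in the run, so the readback through FTL only passes if it still matches what was written. A minimal sketch of that generic manifest pattern, with placeholder file names rather than the ones trim.sh uses:

  md5sum data > manifest.md5    # record the checksum while the file is known to be good
  md5sum -c manifest.md5        # later: prints "data: OK" only if the contents still match

The rm -f entries from trim.sh@15-18 right before the END TEST banner are the same test cleaning up that manifest along with its config and data files.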
00:17:50.185 00:17:50.185 real 1m25.533s 00:17:50.185 user 1m41.476s 00:17:50.185 sys 0m16.480s 00:17:50.185 04:38:13 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:50.185 04:38:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:50.446 04:38:13 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:50.446 04:38:13 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:50.446 04:38:13 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:50.446 04:38:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:50.446 ************************************ 00:17:50.446 START TEST ftl_restore 00:17:50.446 ************************************ 00:17:50.446 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:50.446 * Looking for test storage... 00:17:50.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.447 04:38:13 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:50.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.447 --rc genhtml_branch_coverage=1 00:17:50.447 --rc genhtml_function_coverage=1 00:17:50.447 --rc genhtml_legend=1 00:17:50.447 --rc geninfo_all_blocks=1 00:17:50.447 --rc geninfo_unexecuted_blocks=1 00:17:50.447 00:17:50.447 ' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:50.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.447 --rc genhtml_branch_coverage=1 00:17:50.447 --rc genhtml_function_coverage=1 00:17:50.447 --rc genhtml_legend=1 00:17:50.447 --rc geninfo_all_blocks=1 00:17:50.447 --rc geninfo_unexecuted_blocks=1 00:17:50.447 00:17:50.447 ' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:50.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.447 --rc genhtml_branch_coverage=1 00:17:50.447 --rc genhtml_function_coverage=1 00:17:50.447 --rc genhtml_legend=1 00:17:50.447 --rc geninfo_all_blocks=1 00:17:50.447 --rc geninfo_unexecuted_blocks=1 00:17:50.447 00:17:50.447 ' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:50.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.447 --rc genhtml_branch_coverage=1 00:17:50.447 --rc genhtml_function_coverage=1 00:17:50.447 --rc genhtml_legend=1 00:17:50.447 --rc geninfo_all_blocks=1 00:17:50.447 --rc geninfo_unexecuted_blocks=1 00:17:50.447 00:17:50.447 ' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
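A large share of the xtrace above is not the restore test itself but coverage setup: common/autotest_common.sh (the @1690-@1705 entries) reads lcov --version, gets 1.15, and asks scripts/common.sh whether that is older than 2; cmp_versions does this by splitting both version strings on IFS=.-: and comparing them field by field, and the outcome merely selects the LCOV_OPTS/LCOV values exported right afterwards. A rough standalone sketch of that comparison idea, for illustration only and not copied from scripts/common.sh:

  version_lt() {                     # succeeds when version $1 sorts before version $2
    local IFS=.-:
    local -a a b
    local -i i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                         # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'   # first field already decides: 1 < 2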
00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.en6XMY3BIm 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:50.447 
04:38:13 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74589 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74589 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74589 ']' 00:17:50.447 04:38:13 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.447 04:38:13 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:50.708 [2024-11-03 04:38:13.582806] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:17:50.708 [2024-11-03 04:38:13.583126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74589 ] 00:17:50.708 [2024-11-03 04:38:13.746751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.968 [2024-11-03 04:38:13.850324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.540 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.540 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:51.540 04:38:14 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:51.802 04:38:14 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:51.802 04:38:14 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:51.802 04:38:14 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:51.802 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:51.802 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:51.802 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:51.802 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:51.802 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.062 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:52.062 { 00:17:52.062 "name": "nvme0n1", 00:17:52.062 "aliases": [ 00:17:52.062 "15ee3059-f12b-423e-942e-9c00e1d8c864" 00:17:52.062 ], 00:17:52.062 "product_name": "NVMe disk", 00:17:52.062 "block_size": 4096, 00:17:52.062 "num_blocks": 1310720, 00:17:52.062 "uuid": 
"15ee3059-f12b-423e-942e-9c00e1d8c864", 00:17:52.063 "numa_id": -1, 00:17:52.063 "assigned_rate_limits": { 00:17:52.063 "rw_ios_per_sec": 0, 00:17:52.063 "rw_mbytes_per_sec": 0, 00:17:52.063 "r_mbytes_per_sec": 0, 00:17:52.063 "w_mbytes_per_sec": 0 00:17:52.063 }, 00:17:52.063 "claimed": true, 00:17:52.063 "claim_type": "read_many_write_one", 00:17:52.063 "zoned": false, 00:17:52.063 "supported_io_types": { 00:17:52.063 "read": true, 00:17:52.063 "write": true, 00:17:52.063 "unmap": true, 00:17:52.063 "flush": true, 00:17:52.063 "reset": true, 00:17:52.063 "nvme_admin": true, 00:17:52.063 "nvme_io": true, 00:17:52.063 "nvme_io_md": false, 00:17:52.063 "write_zeroes": true, 00:17:52.063 "zcopy": false, 00:17:52.063 "get_zone_info": false, 00:17:52.063 "zone_management": false, 00:17:52.063 "zone_append": false, 00:17:52.063 "compare": true, 00:17:52.063 "compare_and_write": false, 00:17:52.063 "abort": true, 00:17:52.063 "seek_hole": false, 00:17:52.063 "seek_data": false, 00:17:52.063 "copy": true, 00:17:52.063 "nvme_iov_md": false 00:17:52.063 }, 00:17:52.063 "driver_specific": { 00:17:52.063 "nvme": [ 00:17:52.063 { 00:17:52.063 "pci_address": "0000:00:11.0", 00:17:52.063 "trid": { 00:17:52.063 "trtype": "PCIe", 00:17:52.063 "traddr": "0000:00:11.0" 00:17:52.063 }, 00:17:52.063 "ctrlr_data": { 00:17:52.063 "cntlid": 0, 00:17:52.063 "vendor_id": "0x1b36", 00:17:52.063 "model_number": "QEMU NVMe Ctrl", 00:17:52.063 "serial_number": "12341", 00:17:52.063 "firmware_revision": "8.0.0", 00:17:52.063 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.063 "oacs": { 00:17:52.063 "security": 0, 00:17:52.063 "format": 1, 00:17:52.063 "firmware": 0, 00:17:52.063 "ns_manage": 1 00:17:52.063 }, 00:17:52.063 "multi_ctrlr": false, 00:17:52.063 "ana_reporting": false 00:17:52.063 }, 00:17:52.063 "vs": { 00:17:52.063 "nvme_version": "1.4" 00:17:52.063 }, 00:17:52.063 "ns_data": { 00:17:52.063 "id": 1, 00:17:52.063 "can_share": false 00:17:52.063 } 00:17:52.063 } 00:17:52.063 ], 00:17:52.063 "mp_policy": "active_passive" 00:17:52.063 } 00:17:52.063 } 00:17:52.063 ]' 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:52.063 04:38:14 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:17:52.063 04:38:14 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:52.063 04:38:14 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.063 04:38:14 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:52.063 04:38:14 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.063 04:38:14 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.324 04:38:15 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4bdf8c97-c6b2-413a-833b-2df7ecc846bc 00:17:52.324 04:38:15 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:52.324 04:38:15 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bdf8c97-c6b2-413a-833b-2df7ecc846bc 00:17:52.324 04:38:15 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:52.585 04:38:15 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=60d4ea42-5c3b-4cd8-a828-a08d077aa7f9 00:17:52.585 04:38:15 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 60d4ea42-5c3b-4cd8-a828-a08d077aa7f9 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=821d20bc-5624-480d-acff-54e3c92b822b 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 821d20bc-5624-480d-acff-54e3c92b822b 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=821d20bc-5624-480d-acff-54e3c92b822b 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:52.846 04:38:15 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 821d20bc-5624-480d-acff-54e3c92b822b 00:17:52.846 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=821d20bc-5624-480d-acff-54e3c92b822b 00:17:52.846 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:52.846 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:52.846 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:52.846 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.106 04:38:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:53.106 { 00:17:53.106 "name": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:53.106 "aliases": [ 00:17:53.106 "lvs/nvme0n1p0" 00:17:53.106 ], 00:17:53.106 "product_name": "Logical Volume", 00:17:53.106 "block_size": 4096, 00:17:53.106 "num_blocks": 26476544, 00:17:53.106 "uuid": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:53.106 "assigned_rate_limits": { 00:17:53.106 "rw_ios_per_sec": 0, 00:17:53.106 "rw_mbytes_per_sec": 0, 00:17:53.106 "r_mbytes_per_sec": 0, 00:17:53.106 "w_mbytes_per_sec": 0 00:17:53.106 }, 00:17:53.106 "claimed": false, 00:17:53.106 "zoned": false, 00:17:53.106 "supported_io_types": { 00:17:53.106 "read": true, 00:17:53.106 "write": true, 00:17:53.106 "unmap": true, 00:17:53.106 "flush": false, 00:17:53.106 "reset": true, 00:17:53.106 "nvme_admin": false, 00:17:53.106 "nvme_io": false, 00:17:53.106 "nvme_io_md": false, 00:17:53.106 "write_zeroes": true, 00:17:53.106 "zcopy": false, 00:17:53.106 "get_zone_info": false, 00:17:53.106 "zone_management": false, 00:17:53.106 "zone_append": false, 00:17:53.106 "compare": false, 00:17:53.106 "compare_and_write": false, 00:17:53.106 "abort": false, 00:17:53.106 "seek_hole": true, 00:17:53.106 "seek_data": true, 00:17:53.106 "copy": false, 00:17:53.106 "nvme_iov_md": false 00:17:53.106 }, 00:17:53.106 "driver_specific": { 00:17:53.106 "lvol": { 00:17:53.106 "lvol_store_uuid": "60d4ea42-5c3b-4cd8-a828-a08d077aa7f9", 00:17:53.106 "base_bdev": "nvme0n1", 00:17:53.106 "thin_provision": true, 00:17:53.106 "num_allocated_clusters": 0, 00:17:53.106 "snapshot": false, 00:17:53.106 "clone": false, 00:17:53.106 "esnap_clone": false 00:17:53.106 } 00:17:53.106 } 00:17:53.106 } 00:17:53.106 ]' 00:17:53.106 04:38:15 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:53.106 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:53.106 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:53.106 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:53.106 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:53.106 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:53.106 04:38:16 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.106 04:38:16 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.106 04:38:16 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:53.368 04:38:16 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:53.368 04:38:16 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:53.368 04:38:16 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.368 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.368 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:53.368 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:53.368 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:53.368 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.629 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:53.629 { 00:17:53.629 "name": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:53.629 "aliases": [ 00:17:53.629 "lvs/nvme0n1p0" 00:17:53.629 ], 00:17:53.629 "product_name": "Logical Volume", 00:17:53.629 "block_size": 4096, 00:17:53.629 "num_blocks": 26476544, 00:17:53.629 "uuid": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:53.629 "assigned_rate_limits": { 00:17:53.629 "rw_ios_per_sec": 0, 00:17:53.629 "rw_mbytes_per_sec": 0, 00:17:53.629 "r_mbytes_per_sec": 0, 00:17:53.629 "w_mbytes_per_sec": 0 00:17:53.629 }, 00:17:53.629 "claimed": false, 00:17:53.629 "zoned": false, 00:17:53.629 "supported_io_types": { 00:17:53.629 "read": true, 00:17:53.629 "write": true, 00:17:53.629 "unmap": true, 00:17:53.629 "flush": false, 00:17:53.629 "reset": true, 00:17:53.629 "nvme_admin": false, 00:17:53.629 "nvme_io": false, 00:17:53.629 "nvme_io_md": false, 00:17:53.629 "write_zeroes": true, 00:17:53.629 "zcopy": false, 00:17:53.629 "get_zone_info": false, 00:17:53.629 "zone_management": false, 00:17:53.629 "zone_append": false, 00:17:53.629 "compare": false, 00:17:53.629 "compare_and_write": false, 00:17:53.629 "abort": false, 00:17:53.629 "seek_hole": true, 00:17:53.629 "seek_data": true, 00:17:53.629 "copy": false, 00:17:53.629 "nvme_iov_md": false 00:17:53.629 }, 00:17:53.629 "driver_specific": { 00:17:53.629 "lvol": { 00:17:53.629 "lvol_store_uuid": "60d4ea42-5c3b-4cd8-a828-a08d077aa7f9", 00:17:53.630 "base_bdev": "nvme0n1", 00:17:53.630 "thin_provision": true, 00:17:53.630 "num_allocated_clusters": 0, 00:17:53.630 "snapshot": false, 00:17:53.630 "clone": false, 00:17:53.630 "esnap_clone": false 00:17:53.630 } 00:17:53.630 } 00:17:53.630 } 00:17:53.630 ]' 00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
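The get_bdev_size helper, traced several times above and below, derives a bdev's size in MiB from two fields of bdev_get_bdevs: block_size and num_blocks. A minimal sketch of that calculation, assuming only the rpc.py and jq invocations already shown in the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=821d20bc-5624-480d-acff-54e3c92b822b
  info=$($RPC bdev_get_bdevs -b "$bdev")
  bs=$(jq '.[] .block_size' <<< "$info")      # 4096
  nb=$(jq '.[] .num_blocks' <<< "$info")      # 26476544
  echo $(( bs * nb / 1024 / 1024 ))           # 4096 * 26476544 / 2^20 = 103424 MiB

The same arithmetic explains the earlier nvme0n1 result: 4096 * 1310720 / 2^20 = 5120 MiB.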
00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:53.630 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:53.630 04:38:16 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:53.630 04:38:16 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:53.891 04:38:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:53.891 04:38:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.891 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=821d20bc-5624-480d-acff-54e3c92b822b 00:17:53.891 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:53.891 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:53.891 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:53.891 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 821d20bc-5624-480d-acff-54e3c92b822b 00:17:54.152 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:54.152 { 00:17:54.152 "name": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:54.152 "aliases": [ 00:17:54.152 "lvs/nvme0n1p0" 00:17:54.152 ], 00:17:54.152 "product_name": "Logical Volume", 00:17:54.152 "block_size": 4096, 00:17:54.152 "num_blocks": 26476544, 00:17:54.152 "uuid": "821d20bc-5624-480d-acff-54e3c92b822b", 00:17:54.152 "assigned_rate_limits": { 00:17:54.152 "rw_ios_per_sec": 0, 00:17:54.152 "rw_mbytes_per_sec": 0, 00:17:54.152 "r_mbytes_per_sec": 0, 00:17:54.152 "w_mbytes_per_sec": 0 00:17:54.152 }, 00:17:54.152 "claimed": false, 00:17:54.152 "zoned": false, 00:17:54.152 "supported_io_types": { 00:17:54.152 "read": true, 00:17:54.152 "write": true, 00:17:54.152 "unmap": true, 00:17:54.152 "flush": false, 00:17:54.152 "reset": true, 00:17:54.152 "nvme_admin": false, 00:17:54.152 "nvme_io": false, 00:17:54.152 "nvme_io_md": false, 00:17:54.152 "write_zeroes": true, 00:17:54.152 "zcopy": false, 00:17:54.152 "get_zone_info": false, 00:17:54.152 "zone_management": false, 00:17:54.152 "zone_append": false, 00:17:54.152 "compare": false, 00:17:54.152 "compare_and_write": false, 00:17:54.152 "abort": false, 00:17:54.152 "seek_hole": true, 00:17:54.152 "seek_data": true, 00:17:54.152 "copy": false, 00:17:54.152 "nvme_iov_md": false 00:17:54.152 }, 00:17:54.152 "driver_specific": { 00:17:54.152 "lvol": { 00:17:54.152 "lvol_store_uuid": "60d4ea42-5c3b-4cd8-a828-a08d077aa7f9", 00:17:54.152 "base_bdev": "nvme0n1", 00:17:54.152 "thin_provision": true, 00:17:54.152 "num_allocated_clusters": 0, 00:17:54.152 "snapshot": false, 00:17:54.152 "clone": false, 00:17:54.152 "esnap_clone": false 00:17:54.152 } 00:17:54.152 } 00:17:54.152 } 00:17:54.152 ]' 00:17:54.152 04:38:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:54.152 04:38:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:54.152 04:38:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:54.152 04:38:17 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:17:54.152 04:38:17 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:54.152 04:38:17 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 821d20bc-5624-480d-acff-54e3c92b822b --l2p_dram_limit 10' 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:54.152 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:54.152 04:38:17 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 821d20bc-5624-480d-acff-54e3c92b822b --l2p_dram_limit 10 -c nvc0n1p0 00:17:54.152 [2024-11-03 04:38:17.224965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.152 [2024-11-03 04:38:17.225006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.152 [2024-11-03 04:38:17.225022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.152 [2024-11-03 04:38:17.225028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.152 [2024-11-03 04:38:17.225067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.152 [2024-11-03 04:38:17.225075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.152 [2024-11-03 04:38:17.225084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:54.152 [2024-11-03 04:38:17.225101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.152 [2024-11-03 04:38:17.225121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.152 [2024-11-03 04:38:17.225643] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.152 [2024-11-03 04:38:17.225661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.152 [2024-11-03 04:38:17.225667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.152 [2024-11-03 04:38:17.225676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:54.152 [2024-11-03 04:38:17.225681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.152 [2024-11-03 04:38:17.225706] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:17:54.152 [2024-11-03 04:38:17.226957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.152 [2024-11-03 04:38:17.226983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:54.152 [2024-11-03 04:38:17.226991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:54.152 [2024-11-03 04:38:17.227000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.233829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 
04:38:17.233857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.415 [2024-11-03 04:38:17.233865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.788 ms 00:17:54.415 [2024-11-03 04:38:17.233874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.233974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.233984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.415 [2024-11-03 04:38:17.233991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:54.415 [2024-11-03 04:38:17.234001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.234036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.234046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.415 [2024-11-03 04:38:17.234053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.415 [2024-11-03 04:38:17.234061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.234080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.415 [2024-11-03 04:38:17.237332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.237357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.415 [2024-11-03 04:38:17.237368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:17:54.415 [2024-11-03 04:38:17.237376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.237406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.237412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.415 [2024-11-03 04:38:17.237420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.415 [2024-11-03 04:38:17.237426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.237441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:54.415 [2024-11-03 04:38:17.237553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.415 [2024-11-03 04:38:17.237578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.415 [2024-11-03 04:38:17.237588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.415 [2024-11-03 04:38:17.237597] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237604] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237613] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.415 [2024-11-03 04:38:17.237619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.415 [2024-11-03 04:38:17.237626] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.415 [2024-11-03 04:38:17.237632] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.415 [2024-11-03 04:38:17.237641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.237647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.415 [2024-11-03 04:38:17.237657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:54.415 [2024-11-03 04:38:17.237667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.237733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.415 [2024-11-03 04:38:17.237740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.415 [2024-11-03 04:38:17.237748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:54.415 [2024-11-03 04:38:17.237753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.415 [2024-11-03 04:38:17.237829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.415 [2024-11-03 04:38:17.237838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.415 [2024-11-03 04:38:17.237846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.415 [2024-11-03 04:38:17.237865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.415 [2024-11-03 04:38:17.237885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.415 [2024-11-03 04:38:17.237897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.415 [2024-11-03 04:38:17.237904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.415 [2024-11-03 04:38:17.237911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.415 [2024-11-03 04:38:17.237916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.415 [2024-11-03 04:38:17.237923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:54.415 [2024-11-03 04:38:17.237929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.415 [2024-11-03 04:38:17.237944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.415 [2024-11-03 04:38:17.237965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.415 
[2024-11-03 04:38:17.237983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.415 [2024-11-03 04:38:17.237989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.415 [2024-11-03 04:38:17.237995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.415 [2024-11-03 04:38:17.238002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.415 [2024-11-03 04:38:17.238007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.415 [2024-11-03 04:38:17.238013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.415 [2024-11-03 04:38:17.238019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:54.415 [2024-11-03 04:38:17.238025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.415 [2024-11-03 04:38:17.238031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.415 [2024-11-03 04:38:17.238039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:54.415 [2024-11-03 04:38:17.238044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.415 [2024-11-03 04:38:17.238050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.415 [2024-11-03 04:38:17.238055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:54.415 [2024-11-03 04:38:17.238062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.415 [2024-11-03 04:38:17.238067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.416 [2024-11-03 04:38:17.238073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:54.416 [2024-11-03 04:38:17.238079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.416 [2024-11-03 04:38:17.238086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.416 [2024-11-03 04:38:17.238091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:54.416 [2024-11-03 04:38:17.238097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.416 [2024-11-03 04:38:17.238102] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.416 [2024-11-03 04:38:17.238111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.416 [2024-11-03 04:38:17.238117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.416 [2024-11-03 04:38:17.238124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.416 [2024-11-03 04:38:17.238131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.416 [2024-11-03 04:38:17.238139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.416 [2024-11-03 04:38:17.238143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.416 [2024-11-03 04:38:17.238150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.416 [2024-11-03 04:38:17.238154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.416 [2024-11-03 04:38:17.238162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.416 [2024-11-03 04:38:17.238171] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.416 [2024-11-03 
04:38:17.238183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.416 [2024-11-03 04:38:17.238198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:54.416 [2024-11-03 04:38:17.238203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:54.416 [2024-11-03 04:38:17.238211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:54.416 [2024-11-03 04:38:17.238216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:54.416 [2024-11-03 04:38:17.238223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:54.416 [2024-11-03 04:38:17.238229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:54.416 [2024-11-03 04:38:17.238236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:54.416 [2024-11-03 04:38:17.238242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:54.416 [2024-11-03 04:38:17.238251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:54.416 [2024-11-03 04:38:17.238282] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.416 [2024-11-03 04:38:17.238289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.416 [2024-11-03 04:38:17.238304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.416 [2024-11-03 04:38:17.238310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.416 [2024-11-03 04:38:17.238317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.416 [2024-11-03 04:38:17.238323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.416 [2024-11-03 04:38:17.238330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.416 [2024-11-03 04:38:17.238336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:17:54.416 [2024-11-03 04:38:17.238344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.416 [2024-11-03 04:38:17.238387] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:54.416 [2024-11-03 04:38:17.238400] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:57.715 [2024-11-03 04:38:20.745250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.715 [2024-11-03 04:38:20.745310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:57.715 [2024-11-03 04:38:20.745323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3506.847 ms 00:17:57.715 [2024-11-03 04:38:20.745332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.715 [2024-11-03 04:38:20.769331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.715 [2024-11-03 04:38:20.769376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.715 [2024-11-03 04:38:20.769388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.820 ms 00:17:57.715 [2024-11-03 04:38:20.769397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.715 [2024-11-03 04:38:20.769490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.715 [2024-11-03 04:38:20.769500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.715 [2024-11-03 04:38:20.769507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:57.715 [2024-11-03 04:38:20.769517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.715 [2024-11-03 04:38:20.796358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.715 [2024-11-03 04:38:20.796390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.715 [2024-11-03 04:38:20.796400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.801 ms 00:17:57.715 [2024-11-03 04:38:20.796408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.716 [2024-11-03 04:38:20.796432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.716 [2024-11-03 04:38:20.796442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.716 [2024-11-03 04:38:20.796448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.716 [2024-11-03 04:38:20.796458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.716 [2024-11-03 04:38:20.796896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.716 [2024-11-03 04:38:20.796915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.716 [2024-11-03 04:38:20.796923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:57.716 [2024-11-03 04:38:20.796932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.716 
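The startup trace above is produced by the single bdev_ftl_create RPC assembled from ftl_construct_args earlier: the thin-provisioned lvol is the base device, the nvc0n1p0 split is the write-buffer cache, and --l2p_dram_limit 10 caps the resident L2P at 10 MiB even though the full table (20971520 entries x 4 bytes = 80 MiB, per the layout dump) would not fit; the "l2p maximum resident size is: 9 (of 10) MiB" notice further down reflects that cap. A minimal standalone sketch of the same call; the guarded test in the comment is only an illustration of how the "[: : integer expression expected" warning from restore.sh line 54 could be avoided, not the script's actual variable name:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  base=821d20bc-5624-480d-acff-54e3c92b822b    # lvs/nvme0n1p0 created above
  # defensive form of the line-54 check: [ "${flag:-0}" -eq 1 ]   (flag name is illustrative)
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" --l2p_dram_limit 10 -c nvc0n1p0

On success the RPC returns the new bdev's name and UUID, which appear further down once the "FTL startup" management process finishes.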
[2024-11-03 04:38:20.797015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.716 [2024-11-03 04:38:20.797025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.716 [2024-11-03 04:38:20.797032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:57.716 [2024-11-03 04:38:20.797043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.810227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.810256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.976 [2024-11-03 04:38:20.810265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.167 ms 00:17:57.976 [2024-11-03 04:38:20.810276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.821961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:57.976 [2024-11-03 04:38:20.825120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.825146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:57.976 [2024-11-03 04:38:20.825157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.785 ms 00:17:57.976 [2024-11-03 04:38:20.825163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.898125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.898246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:57.976 [2024-11-03 04:38:20.898303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.938 ms 00:17:57.976 [2024-11-03 04:38:20.898323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.898494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.898519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:57.976 [2024-11-03 04:38:20.898595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:57.976 [2024-11-03 04:38:20.898618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.916797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.916892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:57.976 [2024-11-03 04:38:20.916940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.148 ms 00:17:57.976 [2024-11-03 04:38:20.916960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.934263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.934360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:57.976 [2024-11-03 04:38:20.934417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.267 ms 00:17:57.976 [2024-11-03 04:38:20.934433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.934901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.934935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.976 
[2024-11-03 04:38:20.934954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:17:57.976 [2024-11-03 04:38:20.935023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:20.997828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:20.997925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:57.976 [2024-11-03 04:38:20.997972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.766 ms 00:17:57.976 [2024-11-03 04:38:20.997991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.018198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:21.018291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:57.976 [2024-11-03 04:38:21.018339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.124 ms 00:17:57.976 [2024-11-03 04:38:21.018358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.036710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:21.036767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:57.976 [2024-11-03 04:38:21.036792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:17:57.976 [2024-11-03 04:38:21.036808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.055993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:21.056087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.976 [2024-11-03 04:38:21.056131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.142 ms 00:17:57.976 [2024-11-03 04:38:21.056148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.056209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:21.056229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.976 [2024-11-03 04:38:21.056251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.976 [2024-11-03 04:38:21.056266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.056343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.976 [2024-11-03 04:38:21.056409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:57.976 [2024-11-03 04:38:21.056431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:57.976 [2024-11-03 04:38:21.056447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.976 [2024-11-03 04:38:21.057305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3831.958 ms, result 0 00:17:58.236 { 00:17:58.236 "name": "ftl0", 00:17:58.236 "uuid": "73eb68bc-cabf-4fe5-8508-e3167e0524d2" 00:17:58.236 } 00:17:58.236 04:38:21 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:58.236 04:38:21 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:58.236 04:38:21 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:58.236 04:38:21 ftl.ftl_restore -- 
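The echo / save_subsystem_config / echo sequence just above (restore.sh lines 61-63) snapshots the bdev subsystem configuration so a later spdk_dd invocation can recreate the same lvol/FTL stack without the test application. A sketch of the equivalent commands, assuming the three outputs are collected into the ftl.json file that spdk_dd consumes later in the run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  {
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
  } > "$CFG"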
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:58.497 [2024-11-03 04:38:21.464685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.464718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.497 [2024-11-03 04:38:21.464728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:58.497 [2024-11-03 04:38:21.464745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.464779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.497 [2024-11-03 04:38:21.467028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.467129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.497 [2024-11-03 04:38:21.467146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:17:58.497 [2024-11-03 04:38:21.467153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.467357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.467366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.497 [2024-11-03 04:38:21.467374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:17:58.497 [2024-11-03 04:38:21.467382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.469838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.469855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.497 [2024-11-03 04:38:21.469863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:17:58.497 [2024-11-03 04:38:21.469869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.474526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.474547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.497 [2024-11-03 04:38:21.474565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.641 ms 00:17:58.497 [2024-11-03 04:38:21.474573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.492769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.492865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:58.497 [2024-11-03 04:38:21.492881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.134 ms 00:17:58.497 [2024-11-03 04:38:21.492886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.506250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.506280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.497 [2024-11-03 04:38:21.506292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.334 ms 00:17:58.497 [2024-11-03 04:38:21.506299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.506413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.506422] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.497 [2024-11-03 04:38:21.506431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:58.497 [2024-11-03 04:38:21.506437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.524858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.524883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:58.497 [2024-11-03 04:38:21.524893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.405 ms 00:17:58.497 [2024-11-03 04:38:21.524898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.542887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.542911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:58.497 [2024-11-03 04:38:21.542921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:17:58.497 [2024-11-03 04:38:21.542927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.497 [2024-11-03 04:38:21.560474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.497 [2024-11-03 04:38:21.560579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:58.497 [2024-11-03 04:38:21.560594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.514 ms 00:17:58.497 [2024-11-03 04:38:21.560600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.759 [2024-11-03 04:38:21.578565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.759 [2024-11-03 04:38:21.578588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:58.759 [2024-11-03 04:38:21.578599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.905 ms 00:17:58.759 [2024-11-03 04:38:21.578604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.759 [2024-11-03 04:38:21.578634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:58.759 [2024-11-03 04:38:21.578646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578712] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:58.759 [2024-11-03 04:38:21.578854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 
[2024-11-03 04:38:21.578887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.578994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:58.760 [2024-11-03 04:38:21.579061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:58.760 [2024-11-03 04:38:21.579350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:58.760 [2024-11-03 04:38:21.579358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:17:58.760 [2024-11-03 04:38:21.579364] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:58.760 [2024-11-03 04:38:21.579375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:58.760 [2024-11-03 04:38:21.579381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:58.760 [2024-11-03 04:38:21.579388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:58.760 [2024-11-03 04:38:21.579397] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:58.760 [2024-11-03 04:38:21.579404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:58.760 [2024-11-03 04:38:21.579410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:58.760 [2024-11-03 04:38:21.579416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:58.760 [2024-11-03 04:38:21.579421] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:58.760 [2024-11-03 04:38:21.579428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.760 [2024-11-03 04:38:21.579434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:58.760 [2024-11-03 04:38:21.579442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:17:58.760 [2024-11-03 04:38:21.579448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.760 [2024-11-03 04:38:21.589409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.760 [2024-11-03 04:38:21.589434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:58.760 [2024-11-03 04:38:21.589444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.936 ms 00:17:58.760 [2024-11-03 04:38:21.589450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.589750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.761 [2024-11-03 04:38:21.589762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:58.761 [2024-11-03 04:38:21.589772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:17:58.761 [2024-11-03 04:38:21.589778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.624865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.624894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.761 [2024-11-03 04:38:21.624904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.624912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.624962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.624969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.761 [2024-11-03 04:38:21.624977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.624983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.625044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.625053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.761 [2024-11-03 04:38:21.625061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.625067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.625085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.625092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.761 [2024-11-03 04:38:21.625100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.625105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.688018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.688051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.761 [2024-11-03 04:38:21.688062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 
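In the statistics block above, "WAF: inf" follows directly from the two counters printed with it:

  WAF = total writes / user writes = 960 / 0  ->  undefined, reported as inf

No user data has been written yet (total valid LBAs: 0), so all 960 media writes are FTL-internal metadata from startup and shutdown.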
[2024-11-03 04:38:21.688069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.739539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.739729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.761 [2024-11-03 04:38:21.739747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.739755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.739840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.739851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.761 [2024-11-03 04:38:21.739859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.739866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.739909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.739917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.761 [2024-11-03 04:38:21.739925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.739931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.740015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.740023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.761 [2024-11-03 04:38:21.740033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.740039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.740069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.740077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.761 [2024-11-03 04:38:21.740085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.740091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.740126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.740134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.761 [2024-11-03 04:38:21.740143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.740149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.740193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.761 [2024-11-03 04:38:21.740201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.761 [2024-11-03 04:38:21.740209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.761 [2024-11-03 04:38:21.740215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.761 [2024-11-03 04:38:21.740334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.608 ms, result 0 00:17:58.761 true 00:17:58.761 04:38:21 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74589 00:17:58.761 
04:38:21 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74589 ']' 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74589 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74589 00:17:58.761 killing process with pid 74589 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74589' 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74589 00:17:58.761 04:38:21 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74589 00:18:05.347 04:38:27 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:08.646 262144+0 records in 00:18:08.646 262144+0 records out 00:18:08.646 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.86832 s, 278 MB/s 00:18:08.646 04:38:31 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:10.560 04:38:33 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.560 [2024-11-03 04:38:33.405675] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:18:10.560 [2024-11-03 04:38:33.405776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74808 ] 00:18:10.560 [2024-11-03 04:38:33.562652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.838 [2024-11-03 04:38:33.677434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.119 [2024-11-03 04:38:33.998369] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.119 [2024-11-03 04:38:33.998460] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.119 [2024-11-03 04:38:34.163259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.163332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:11.119 [2024-11-03 04:38:34.163356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:11.119 [2024-11-03 04:38:34.163367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.163437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.163449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.119 [2024-11-03 04:38:34.163462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:11.119 [2024-11-03 04:38:34.163471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.163494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:18:11.119 [2024-11-03 04:38:34.164298] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:11.119 [2024-11-03 04:38:34.164332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.164342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.119 [2024-11-03 04:38:34.164352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:18:11.119 [2024-11-03 04:38:34.164361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.166666] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:11.119 [2024-11-03 04:38:34.182031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.182261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:11.119 [2024-11-03 04:38:34.182286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.367 ms 00:18:11.119 [2024-11-03 04:38:34.182296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.182542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.182605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:11.119 [2024-11-03 04:38:34.182617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:11.119 [2024-11-03 04:38:34.182626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.194429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.194480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.119 [2024-11-03 04:38:34.194493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.717 ms 00:18:11.119 [2024-11-03 04:38:34.194502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.194620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.194631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.119 [2024-11-03 04:38:34.194640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:11.119 [2024-11-03 04:38:34.194651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.194713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.194724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:11.119 [2024-11-03 04:38:34.194734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:11.119 [2024-11-03 04:38:34.194744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.194768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.119 [2024-11-03 04:38:34.199324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.199368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.119 [2024-11-03 04:38:34.199380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.561 ms 00:18:11.119 [2024-11-03 04:38:34.199392] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.199430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.199441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:11.119 [2024-11-03 04:38:34.199449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:11.119 [2024-11-03 04:38:34.199458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.199497] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:11.119 [2024-11-03 04:38:34.199524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:11.119 [2024-11-03 04:38:34.199576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:11.119 [2024-11-03 04:38:34.199599] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:11.119 [2024-11-03 04:38:34.199711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:11.119 [2024-11-03 04:38:34.199724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:11.119 [2024-11-03 04:38:34.199737] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:11.119 [2024-11-03 04:38:34.199748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:11.119 [2024-11-03 04:38:34.199759] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:11.119 [2024-11-03 04:38:34.199769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:11.119 [2024-11-03 04:38:34.199777] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:11.119 [2024-11-03 04:38:34.199787] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:11.119 [2024-11-03 04:38:34.199795] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:11.119 [2024-11-03 04:38:34.199807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.199815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:11.119 [2024-11-03 04:38:34.199825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:11.119 [2024-11-03 04:38:34.199833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.199918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.119 [2024-11-03 04:38:34.199927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:11.119 [2024-11-03 04:38:34.199935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:11.119 [2024-11-03 04:38:34.199943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.119 [2024-11-03 04:38:34.200049] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:11.119 [2024-11-03 04:38:34.200064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:11.119 [2024-11-03 04:38:34.200074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.119 
[2024-11-03 04:38:34.200083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:11.119 [2024-11-03 04:38:34.200101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:11.119 [2024-11-03 04:38:34.200124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.119 [2024-11-03 04:38:34.200139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:11.119 [2024-11-03 04:38:34.200147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:11.119 [2024-11-03 04:38:34.200154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.119 [2024-11-03 04:38:34.200162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:11.119 [2024-11-03 04:38:34.200175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:11.119 [2024-11-03 04:38:34.200190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:11.119 [2024-11-03 04:38:34.200207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:11.119 [2024-11-03 04:38:34.200230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:11.119 [2024-11-03 04:38:34.200253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:11.119 [2024-11-03 04:38:34.200276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:11.119 [2024-11-03 04:38:34.200297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.119 [2024-11-03 04:38:34.200313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:11.119 [2024-11-03 04:38:34.200321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:11.119 [2024-11-03 04:38:34.200328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.119 [2024-11-03 04:38:34.200336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:11.120 [2024-11-03 04:38:34.200343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:11.120 [2024-11-03 04:38:34.200351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.120 [2024-11-03 04:38:34.200357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:11.120 [2024-11-03 04:38:34.200364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:11.120 [2024-11-03 04:38:34.200370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.120 [2024-11-03 04:38:34.200377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:11.120 [2024-11-03 04:38:34.200385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:11.120 [2024-11-03 04:38:34.200391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.120 [2024-11-03 04:38:34.200398] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:11.120 [2024-11-03 04:38:34.200406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:11.120 [2024-11-03 04:38:34.200414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.120 [2024-11-03 04:38:34.200424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.120 [2024-11-03 04:38:34.200433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:11.120 [2024-11-03 04:38:34.200440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:11.120 [2024-11-03 04:38:34.200447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:11.120 [2024-11-03 04:38:34.200457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:11.120 [2024-11-03 04:38:34.200464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:11.120 [2024-11-03 04:38:34.200470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:11.120 [2024-11-03 04:38:34.200479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:11.120 [2024-11-03 04:38:34.200491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.120 [2024-11-03 04:38:34.200502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:11.120 [2024-11-03 04:38:34.200509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:11.120 [2024-11-03 04:38:34.200519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:11.120 [2024-11-03 04:38:34.200527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:11.120 [2024-11-03 04:38:34.200538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:11.120 [2024-11-03 04:38:34.200545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:11.120 [2024-11-03 04:38:34.200555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:11.120 [2024-11-03 04:38:34.200579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:11.120 [2024-11-03 04:38:34.200586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:11.120 [2024-11-03 04:38:34.200592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:11.120 [2024-11-03 04:38:34.200599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:11.120 [2024-11-03 04:38:34.200606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:11.120 [2024-11-03 04:38:34.200614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:11.120 [2024-11-03 04:38:34.200622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:11.381 [2024-11-03 04:38:34.200630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:11.381 [2024-11-03 04:38:34.200640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.381 [2024-11-03 04:38:34.200651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:11.381 [2024-11-03 04:38:34.200660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:11.381 [2024-11-03 04:38:34.200669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:11.381 [2024-11-03 04:38:34.200676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:11.381 [2024-11-03 04:38:34.200685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.381 [2024-11-03 04:38:34.200694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:11.382 [2024-11-03 04:38:34.200703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:18:11.382 [2024-11-03 04:38:34.200713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.239080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.239139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.382 [2024-11-03 04:38:34.239154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.304 ms 00:18:11.382 [2024-11-03 04:38:34.239163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.239262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.239278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.382 [2024-11-03 04:38:34.239288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 
00:18:11.382 [2024-11-03 04:38:34.239297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.289108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.289338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.382 [2024-11-03 04:38:34.289362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.747 ms 00:18:11.382 [2024-11-03 04:38:34.289372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.289427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.289438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.382 [2024-11-03 04:38:34.289448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:11.382 [2024-11-03 04:38:34.289463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.290234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.290279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.382 [2024-11-03 04:38:34.290291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:18:11.382 [2024-11-03 04:38:34.290300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.290477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.290488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.382 [2024-11-03 04:38:34.290497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:18:11.382 [2024-11-03 04:38:34.290507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.309004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.309060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.382 [2024-11-03 04:38:34.309073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.468 ms 00:18:11.382 [2024-11-03 04:38:34.309087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.324528] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:11.382 [2024-11-03 04:38:34.324795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:11.382 [2024-11-03 04:38:34.324817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.324828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:11.382 [2024-11-03 04:38:34.324839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.600 ms 00:18:11.382 [2024-11-03 04:38:34.324848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.351446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.351499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.382 [2024-11-03 04:38:34.351519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.550 ms 00:18:11.382 [2024-11-03 04:38:34.351528] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.364821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.364883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.382 [2024-11-03 04:38:34.364895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.218 ms 00:18:11.382 [2024-11-03 04:38:34.364903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.377803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.377847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:11.382 [2024-11-03 04:38:34.377860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.854 ms 00:18:11.382 [2024-11-03 04:38:34.377868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.378533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.378572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.382 [2024-11-03 04:38:34.378585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:18:11.382 [2024-11-03 04:38:34.378594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.382 [2024-11-03 04:38:34.452941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.382 [2024-11-03 04:38:34.453002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.382 [2024-11-03 04:38:34.453018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.325 ms 00:18:11.382 [2024-11-03 04:38:34.453028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.466095] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.644 [2024-11-03 04:38:34.470322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.470364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.644 [2024-11-03 04:38:34.470377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.228 ms 00:18:11.644 [2024-11-03 04:38:34.470387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.470479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.470491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.644 [2024-11-03 04:38:34.470502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:11.644 [2024-11-03 04:38:34.470512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.470614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.470631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.644 [2024-11-03 04:38:34.470642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:11.644 [2024-11-03 04:38:34.470651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.470675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.470685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:11.644 [2024-11-03 04:38:34.470695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:11.644 [2024-11-03 04:38:34.470704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.470747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.644 [2024-11-03 04:38:34.470759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.470769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.644 [2024-11-03 04:38:34.470782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:11.644 [2024-11-03 04:38:34.470791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.497452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.497499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.644 [2024-11-03 04:38:34.497514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.640 ms 00:18:11.644 [2024-11-03 04:38:34.497522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.497635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.644 [2024-11-03 04:38:34.497648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.644 [2024-11-03 04:38:34.497658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:11.644 [2024-11-03 04:38:34.497666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.644 [2024-11-03 04:38:34.499246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 335.433 ms, result 0 00:18:12.588  [2024-11-03T04:38:36.617Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-03T04:38:37.561Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-03T04:38:38.948Z] Copying: 36/1024 [MB] (15 MBps) [2024-11-03T04:38:39.521Z] Copying: 47456/1048576 [kB] (10240 kBps) [2024-11-03T04:38:40.908Z] Copying: 57/1024 [MB] (10 MBps) [2024-11-03T04:38:41.852Z] Copying: 68/1024 [MB] (11 MBps) [2024-11-03T04:38:42.795Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-03T04:38:43.741Z] Copying: 90/1024 [MB] (11 MBps) [2024-11-03T04:38:44.684Z] Copying: 101/1024 [MB] (10 MBps) [2024-11-03T04:38:45.641Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-03T04:38:46.583Z] Copying: 123/1024 [MB] (11 MBps) [2024-11-03T04:38:47.526Z] Copying: 134/1024 [MB] (11 MBps) [2024-11-03T04:38:48.911Z] Copying: 146/1024 [MB] (11 MBps) [2024-11-03T04:38:49.855Z] Copying: 156/1024 [MB] (10 MBps) [2024-11-03T04:38:50.797Z] Copying: 168/1024 [MB] (11 MBps) [2024-11-03T04:38:51.738Z] Copying: 179/1024 [MB] (11 MBps) [2024-11-03T04:38:52.682Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-03T04:38:53.625Z] Copying: 202/1024 [MB] (11 MBps) [2024-11-03T04:38:54.570Z] Copying: 214/1024 [MB] (11 MBps) [2024-11-03T04:38:55.537Z] Copying: 225/1024 [MB] (10 MBps) [2024-11-03T04:38:56.925Z] Copying: 248/1024 [MB] (23 MBps) [2024-11-03T04:38:57.870Z] Copying: 262/1024 [MB] (13 MBps) [2024-11-03T04:38:58.814Z] Copying: 280/1024 [MB] (18 MBps) [2024-11-03T04:38:59.758Z] Copying: 293/1024 [MB] (13 MBps) [2024-11-03T04:39:00.700Z] Copying: 307/1024 [MB] (14 MBps) [2024-11-03T04:39:01.639Z] Copying: 320/1024 [MB] (12 MBps) [2024-11-03T04:39:02.583Z] Copying: 336/1024 [MB] (15 MBps) 
[2024-11-03T04:39:03.525Z] Copying: 355/1024 [MB] (19 MBps) [2024-11-03T04:39:04.910Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-03T04:39:05.851Z] Copying: 383/1024 [MB] (16 MBps) [2024-11-03T04:39:06.792Z] Copying: 400/1024 [MB] (17 MBps) [2024-11-03T04:39:07.729Z] Copying: 416/1024 [MB] (15 MBps) [2024-11-03T04:39:08.665Z] Copying: 438/1024 [MB] (22 MBps) [2024-11-03T04:39:09.607Z] Copying: 491/1024 [MB] (53 MBps) [2024-11-03T04:39:10.547Z] Copying: 512/1024 [MB] (20 MBps) [2024-11-03T04:39:11.934Z] Copying: 526/1024 [MB] (14 MBps) [2024-11-03T04:39:12.877Z] Copying: 545/1024 [MB] (19 MBps) [2024-11-03T04:39:13.820Z] Copying: 562/1024 [MB] (17 MBps) [2024-11-03T04:39:14.765Z] Copying: 578/1024 [MB] (15 MBps) [2024-11-03T04:39:15.758Z] Copying: 597/1024 [MB] (18 MBps) [2024-11-03T04:39:16.703Z] Copying: 610/1024 [MB] (13 MBps) [2024-11-03T04:39:17.648Z] Copying: 625/1024 [MB] (15 MBps) [2024-11-03T04:39:18.594Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-03T04:39:19.538Z] Copying: 646/1024 [MB] (10 MBps) [2024-11-03T04:39:20.939Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-03T04:39:21.513Z] Copying: 675/1024 [MB] (18 MBps) [2024-11-03T04:39:22.898Z] Copying: 695/1024 [MB] (20 MBps) [2024-11-03T04:39:23.839Z] Copying: 718/1024 [MB] (22 MBps) [2024-11-03T04:39:24.783Z] Copying: 731/1024 [MB] (12 MBps) [2024-11-03T04:39:25.724Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-03T04:39:26.667Z] Copying: 759/1024 [MB] (17 MBps) [2024-11-03T04:39:27.611Z] Copying: 779/1024 [MB] (19 MBps) [2024-11-03T04:39:28.556Z] Copying: 798/1024 [MB] (19 MBps) [2024-11-03T04:39:29.943Z] Copying: 816/1024 [MB] (17 MBps) [2024-11-03T04:39:30.516Z] Copying: 831/1024 [MB] (15 MBps) [2024-11-03T04:39:31.904Z] Copying: 850/1024 [MB] (19 MBps) [2024-11-03T04:39:32.849Z] Copying: 863/1024 [MB] (12 MBps) [2024-11-03T04:39:33.796Z] Copying: 878/1024 [MB] (14 MBps) [2024-11-03T04:39:34.742Z] Copying: 893/1024 [MB] (15 MBps) [2024-11-03T04:39:35.717Z] Copying: 907/1024 [MB] (13 MBps) [2024-11-03T04:39:36.665Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-03T04:39:37.610Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-03T04:39:38.555Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-03T04:39:39.941Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-03T04:39:40.513Z] Copying: 969/1024 [MB] (21 MBps) [2024-11-03T04:39:41.899Z] Copying: 984/1024 [MB] (15 MBps) [2024-11-03T04:39:42.843Z] Copying: 995/1024 [MB] (10 MBps) [2024-11-03T04:39:43.786Z] Copying: 1006/1024 [MB] (11 MBps) [2024-11-03T04:39:44.048Z] Copying: 1016/1024 [MB] (10 MBps) [2024-11-03T04:39:44.048Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 04:39:43.902767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.902826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.964 [2024-11-03 04:39:43.902842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.964 [2024-11-03 04:39:43.902851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.902873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.964 [2024-11-03 04:39:43.905905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.905947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.964 [2024-11-03 04:39:43.905959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:19:20.964 [2024-11-03 
04:39:43.905967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.908529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.908587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.964 [2024-11-03 04:39:43.908599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:19:20.964 [2024-11-03 04:39:43.908608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.926827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.926874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.964 [2024-11-03 04:39:43.926887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.202 ms 00:19:20.964 [2024-11-03 04:39:43.926895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.933067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.933118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.964 [2024-11-03 04:39:43.933130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:19:20.964 [2024-11-03 04:39:43.933138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.960241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.960298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.964 [2024-11-03 04:39:43.960312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.042 ms 00:19:20.964 [2024-11-03 04:39:43.960321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.976609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.976662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.964 [2024-11-03 04:39:43.976674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.239 ms 00:19:20.964 [2024-11-03 04:39:43.976683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:43.976850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:43.976864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.964 [2024-11-03 04:39:43.976875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:20.964 [2024-11-03 04:39:43.976891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:44.002610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:44.002658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.964 [2024-11-03 04:39:44.002670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.703 ms 00:19:20.964 [2024-11-03 04:39:44.002677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.964 [2024-11-03 04:39:44.028144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.964 [2024-11-03 04:39:44.028196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.964 [2024-11-03 04:39:44.028221] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.419 ms 00:19:20.964 [2024-11-03 04:39:44.028228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.227 [2024-11-03 04:39:44.053148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.227 [2024-11-03 04:39:44.053194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.227 [2024-11-03 04:39:44.053205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.873 ms 00:19:21.227 [2024-11-03 04:39:44.053213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.227 [2024-11-03 04:39:44.077823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.227 [2024-11-03 04:39:44.077868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.227 [2024-11-03 04:39:44.077879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.535 ms 00:19:21.227 [2024-11-03 04:39:44.077886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.227 [2024-11-03 04:39:44.077931] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.227 [2024-11-03 04:39:44.077949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.077959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.077968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.077975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.077983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.077992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078259] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.227 [2024-11-03 04:39:44.078267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078448] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 
04:39:44.078650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.228 [2024-11-03 04:39:44.078726] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.228 [2024-11-03 04:39:44.078742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:19:21.228 [2024-11-03 04:39:44.078750] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.228 [2024-11-03 04:39:44.078762] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.228 [2024-11-03 04:39:44.078769] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.228 [2024-11-03 04:39:44.078777] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.228 [2024-11-03 04:39:44.078784] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.228 [2024-11-03 04:39:44.078792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.228 [2024-11-03 04:39:44.078800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.228 [2024-11-03 04:39:44.078815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.228 [2024-11-03 04:39:44.078821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.228 [2024-11-03 04:39:44.078828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.228 [2024-11-03 04:39:44.078836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.228 [2024-11-03 04:39:44.078846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:19:21.228 [2024-11-03 04:39:44.078854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.092671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.228 [2024-11-03 04:39:44.092714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.228 [2024-11-03 04:39:44.092741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.798 ms 00:19:21.228 [2024-11-03 04:39:44.092749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.093152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.228 [2024-11-03 
04:39:44.093172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.228 [2024-11-03 04:39:44.093181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:21.228 [2024-11-03 04:39:44.093189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.130438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.228 [2024-11-03 04:39:44.130495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.228 [2024-11-03 04:39:44.130508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.228 [2024-11-03 04:39:44.130518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.130606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.228 [2024-11-03 04:39:44.130618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.228 [2024-11-03 04:39:44.130629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.228 [2024-11-03 04:39:44.130638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.130735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.228 [2024-11-03 04:39:44.130746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.228 [2024-11-03 04:39:44.130756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.228 [2024-11-03 04:39:44.130765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.130782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.228 [2024-11-03 04:39:44.130790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.228 [2024-11-03 04:39:44.130798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.228 [2024-11-03 04:39:44.130805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.228 [2024-11-03 04:39:44.214517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.228 [2024-11-03 04:39:44.214598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.228 [2024-11-03 04:39:44.214613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.214622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.284605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.284669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.229 [2024-11-03 04:39:44.284682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.284691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.284759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.284777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.229 [2024-11-03 04:39:44.284786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.284795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.284854] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.284865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.229 [2024-11-03 04:39:44.284874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.284882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.284983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.284994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.229 [2024-11-03 04:39:44.285007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.285015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.285047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.285057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.229 [2024-11-03 04:39:44.285066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.285073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.285114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.285123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.229 [2024-11-03 04:39:44.285135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.285143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.285190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.229 [2024-11-03 04:39:44.285201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.229 [2024-11-03 04:39:44.285209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.229 [2024-11-03 04:39:44.285217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.229 [2024-11-03 04:39:44.285351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.546 ms, result 0 00:19:22.172 00:19:22.172 00:19:22.172 04:39:45 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:22.172 [2024-11-03 04:39:45.163090] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:19:22.172 [2024-11-03 04:39:45.163259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75547 ] 00:19:22.433 [2024-11-03 04:39:45.324787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.433 [2024-11-03 04:39:45.439752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.694 [2024-11-03 04:39:45.726315] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.694 [2024-11-03 04:39:45.726392] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.957 [2024-11-03 04:39:45.886760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.886826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.957 [2024-11-03 04:39:45.886844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:22.957 [2024-11-03 04:39:45.886853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.886909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.886920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.957 [2024-11-03 04:39:45.886932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:22.957 [2024-11-03 04:39:45.886940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.886962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.957 [2024-11-03 04:39:45.887708] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.957 [2024-11-03 04:39:45.887740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.887750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.957 [2024-11-03 04:39:45.887759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:19:22.957 [2024-11-03 04:39:45.887767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.889462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.957 [2024-11-03 04:39:45.903849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.903903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.957 [2024-11-03 04:39:45.903916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.389 ms 00:19:22.957 [2024-11-03 04:39:45.903924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.904001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.904015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.957 [2024-11-03 04:39:45.904024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:22.957 [2024-11-03 04:39:45.904032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.911976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:22.957 [2024-11-03 04:39:45.912019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.957 [2024-11-03 04:39:45.912029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.867 ms 00:19:22.957 [2024-11-03 04:39:45.912038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.912122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.912132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.957 [2024-11-03 04:39:45.912141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:22.957 [2024-11-03 04:39:45.912149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.912192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.912204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.957 [2024-11-03 04:39:45.912212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:22.957 [2024-11-03 04:39:45.912220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.912244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.957 [2024-11-03 04:39:45.916299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.916340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.957 [2024-11-03 04:39:45.916351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:19:22.957 [2024-11-03 04:39:45.916363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.916399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.916408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.957 [2024-11-03 04:39:45.916418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:22.957 [2024-11-03 04:39:45.916425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.916476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.957 [2024-11-03 04:39:45.916501] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.957 [2024-11-03 04:39:45.916538] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.957 [2024-11-03 04:39:45.916574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.957 [2024-11-03 04:39:45.916681] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.957 [2024-11-03 04:39:45.916695] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.957 [2024-11-03 04:39:45.916706] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.957 [2024-11-03 04:39:45.916741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.957 [2024-11-03 04:39:45.916751] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.957 [2024-11-03 04:39:45.916761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.957 [2024-11-03 04:39:45.916769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.957 [2024-11-03 04:39:45.916777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.957 [2024-11-03 04:39:45.916785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.957 [2024-11-03 04:39:45.916797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.916804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.957 [2024-11-03 04:39:45.916812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:22.957 [2024-11-03 04:39:45.916819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.916903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.957 [2024-11-03 04:39:45.916914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.957 [2024-11-03 04:39:45.916921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.957 [2024-11-03 04:39:45.916928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.957 [2024-11-03 04:39:45.917033] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.957 [2024-11-03 04:39:45.917048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.957 [2024-11-03 04:39:45.917058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.957 [2024-11-03 04:39:45.917082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.957 [2024-11-03 04:39:45.917106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.957 [2024-11-03 04:39:45.917121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.957 [2024-11-03 04:39:45.917131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.957 [2024-11-03 04:39:45.917139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.957 [2024-11-03 04:39:45.917147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.957 [2024-11-03 04:39:45.917155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.957 [2024-11-03 04:39:45.917170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.957 [2024-11-03 04:39:45.917185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917192] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.957 [2024-11-03 04:39:45.917208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.957 [2024-11-03 04:39:45.917229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.957 [2024-11-03 04:39:45.917251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.957 [2024-11-03 04:39:45.917273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.957 [2024-11-03 04:39:45.917288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.957 [2024-11-03 04:39:45.917294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.957 [2024-11-03 04:39:45.917300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.957 [2024-11-03 04:39:45.917307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.957 [2024-11-03 04:39:45.917314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.957 [2024-11-03 04:39:45.917321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.957 [2024-11-03 04:39:45.917328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.957 [2024-11-03 04:39:45.917335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:22.958 [2024-11-03 04:39:45.917341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.958 [2024-11-03 04:39:45.917347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.958 [2024-11-03 04:39:45.917354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.958 [2024-11-03 04:39:45.917361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.958 [2024-11-03 04:39:45.917369] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.958 [2024-11-03 04:39:45.917377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.958 [2024-11-03 04:39:45.917385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.958 [2024-11-03 04:39:45.917394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.958 [2024-11-03 04:39:45.917402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.958 [2024-11-03 04:39:45.917410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.958 [2024-11-03 04:39:45.917417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.958 
[2024-11-03 04:39:45.917424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.958 [2024-11-03 04:39:45.917431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.958 [2024-11-03 04:39:45.917439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.958 [2024-11-03 04:39:45.917447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.958 [2024-11-03 04:39:45.917458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.958 [2024-11-03 04:39:45.917475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.958 [2024-11-03 04:39:45.917483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.958 [2024-11-03 04:39:45.917490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.958 [2024-11-03 04:39:45.917499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.958 [2024-11-03 04:39:45.917507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.958 [2024-11-03 04:39:45.917515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.958 [2024-11-03 04:39:45.917522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.958 [2024-11-03 04:39:45.917530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.958 [2024-11-03 04:39:45.917537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.958 [2024-11-03 04:39:45.917593] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.958 [2024-11-03 04:39:45.917602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.958 [2024-11-03 04:39:45.917625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.958 [2024-11-03 04:39:45.917635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.958 [2024-11-03 04:39:45.917642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.958 [2024-11-03 04:39:45.917662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.917671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.958 [2024-11-03 04:39:45.917679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:19:22.958 [2024-11-03 04:39:45.917687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.949334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.949385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.958 [2024-11-03 04:39:45.949397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.600 ms 00:19:22.958 [2024-11-03 04:39:45.949406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.949493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.949505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.958 [2024-11-03 04:39:45.949514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:22.958 [2024-11-03 04:39:45.949523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.998073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.998132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.958 [2024-11-03 04:39:45.998145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.478 ms 00:19:22.958 [2024-11-03 04:39:45.998154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.998203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.998213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.958 [2024-11-03 04:39:45.998223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.958 [2024-11-03 04:39:45.998235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.998849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.998894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.958 [2024-11-03 04:39:45.998906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:22.958 [2024-11-03 04:39:45.998914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:45.999075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:45.999087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.958 [2024-11-03 04:39:45.999096] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:22.958 [2024-11-03 04:39:45.999104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:46.014906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:46.014952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.958 [2024-11-03 04:39:46.014964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.775 ms 00:19:22.958 [2024-11-03 04:39:46.014975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.958 [2024-11-03 04:39:46.029202] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:22.958 [2024-11-03 04:39:46.029251] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.958 [2024-11-03 04:39:46.029265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.958 [2024-11-03 04:39:46.029273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.958 [2024-11-03 04:39:46.029283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.177 ms 00:19:22.958 [2024-11-03 04:39:46.029291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.055664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.055721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.220 [2024-11-03 04:39:46.055733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.318 ms 00:19:23.220 [2024-11-03 04:39:46.055742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.068726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.068774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.220 [2024-11-03 04:39:46.068785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:19:23.220 [2024-11-03 04:39:46.068793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.081584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.081629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.220 [2024-11-03 04:39:46.081641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.743 ms 00:19:23.220 [2024-11-03 04:39:46.081648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.082292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.082324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.220 [2024-11-03 04:39:46.082335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:19:23.220 [2024-11-03 04:39:46.082343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.147243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.147299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.220 [2024-11-03 04:39:46.147315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.877 ms 00:19:23.220 [2024-11-03 04:39:46.147330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.158267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:23.220 [2024-11-03 04:39:46.161257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.161299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.220 [2024-11-03 04:39:46.161311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.871 ms 00:19:23.220 [2024-11-03 04:39:46.161320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.161407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.161419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.220 [2024-11-03 04:39:46.161430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:23.220 [2024-11-03 04:39:46.161438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.161514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.161526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.220 [2024-11-03 04:39:46.161535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:23.220 [2024-11-03 04:39:46.161544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.161583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.161594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.220 [2024-11-03 04:39:46.161603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.220 [2024-11-03 04:39:46.161612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.220 [2024-11-03 04:39:46.161647] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.220 [2024-11-03 04:39:46.161661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.220 [2024-11-03 04:39:46.161670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.221 [2024-11-03 04:39:46.161679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:23.221 [2024-11-03 04:39:46.161687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.221 [2024-11-03 04:39:46.188218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.221 [2024-11-03 04:39:46.188271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.221 [2024-11-03 04:39:46.188284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.512 ms 00:19:23.221 [2024-11-03 04:39:46.188293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.221 [2024-11-03 04:39:46.188390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.221 [2024-11-03 04:39:46.188402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.221 [2024-11-03 04:39:46.188411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:23.221 [2024-11-03 04:39:46.188420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:23.221 [2024-11-03 04:39:46.190286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.039 ms, result 0 00:19:24.606  [2024-11-03T04:39:48.637Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-03T04:39:49.580Z] Copying: 33/1024 [MB] (17 MBps) [2024-11-03T04:39:50.528Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-03T04:39:51.473Z] Copying: 63/1024 [MB] (16 MBps) [2024-11-03T04:39:52.416Z] Copying: 83/1024 [MB] (20 MBps) [2024-11-03T04:39:53.806Z] Copying: 96/1024 [MB] (12 MBps) [2024-11-03T04:39:54.746Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-03T04:39:55.738Z] Copying: 124/1024 [MB] (15 MBps) [2024-11-03T04:39:56.680Z] Copying: 136/1024 [MB] (12 MBps) [2024-11-03T04:39:57.622Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-03T04:39:58.562Z] Copying: 158/1024 [MB] (10 MBps) [2024-11-03T04:39:59.502Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-03T04:40:00.447Z] Copying: 186/1024 [MB] (16 MBps) [2024-11-03T04:40:01.388Z] Copying: 200/1024 [MB] (14 MBps) [2024-11-03T04:40:02.773Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-03T04:40:03.718Z] Copying: 225/1024 [MB] (14 MBps) [2024-11-03T04:40:04.661Z] Copying: 235/1024 [MB] (10 MBps) [2024-11-03T04:40:05.606Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-03T04:40:06.549Z] Copying: 268/1024 [MB] (20 MBps) [2024-11-03T04:40:07.493Z] Copying: 279/1024 [MB] (10 MBps) [2024-11-03T04:40:08.438Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-03T04:40:09.382Z] Copying: 305/1024 [MB] (15 MBps) [2024-11-03T04:40:10.767Z] Copying: 323/1024 [MB] (18 MBps) [2024-11-03T04:40:11.713Z] Copying: 335/1024 [MB] (11 MBps) [2024-11-03T04:40:12.656Z] Copying: 349/1024 [MB] (14 MBps) [2024-11-03T04:40:13.600Z] Copying: 360/1024 [MB] (10 MBps) [2024-11-03T04:40:14.545Z] Copying: 371/1024 [MB] (10 MBps) [2024-11-03T04:40:15.500Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-03T04:40:16.478Z] Copying: 400/1024 [MB] (18 MBps) [2024-11-03T04:40:17.423Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-03T04:40:18.810Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-03T04:40:19.384Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-03T04:40:20.772Z] Copying: 443/1024 [MB] (10 MBps) [2024-11-03T04:40:21.719Z] Copying: 456/1024 [MB] (12 MBps) [2024-11-03T04:40:22.664Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-03T04:40:23.623Z] Copying: 481/1024 [MB] (14 MBps) [2024-11-03T04:40:24.569Z] Copying: 498/1024 [MB] (16 MBps) [2024-11-03T04:40:25.511Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-03T04:40:26.457Z] Copying: 534/1024 [MB] (19 MBps) [2024-11-03T04:40:27.402Z] Copying: 547/1024 [MB] (12 MBps) [2024-11-03T04:40:28.783Z] Copying: 561/1024 [MB] (13 MBps) [2024-11-03T04:40:29.727Z] Copying: 582/1024 [MB] (21 MBps) [2024-11-03T04:40:30.671Z] Copying: 602/1024 [MB] (20 MBps) [2024-11-03T04:40:31.615Z] Copying: 623/1024 [MB] (20 MBps) [2024-11-03T04:40:32.557Z] Copying: 638/1024 [MB] (15 MBps) [2024-11-03T04:40:33.501Z] Copying: 655/1024 [MB] (17 MBps) [2024-11-03T04:40:34.443Z] Copying: 670/1024 [MB] (14 MBps) [2024-11-03T04:40:35.388Z] Copying: 684/1024 [MB] (13 MBps) [2024-11-03T04:40:36.813Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-03T04:40:37.385Z] Copying: 716/1024 [MB] (20 MBps) [2024-11-03T04:40:38.774Z] Copying: 727/1024 [MB] (11 MBps) [2024-11-03T04:40:39.720Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-03T04:40:40.664Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-03T04:40:41.610Z] Copying: 759/1024 [MB] (10 MBps) [2024-11-03T04:40:42.551Z] Copying: 770/1024 [MB] (10 MBps) [2024-11-03T04:40:43.492Z] Copying: 783/1024 [MB] (13 MBps) 
[2024-11-03T04:40:44.434Z] Copying: 796/1024 [MB] (12 MBps) [2024-11-03T04:40:45.818Z] Copying: 816/1024 [MB] (20 MBps) [2024-11-03T04:40:46.389Z] Copying: 836/1024 [MB] (19 MBps) [2024-11-03T04:40:47.769Z] Copying: 858/1024 [MB] (22 MBps) [2024-11-03T04:40:48.707Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-03T04:40:49.649Z] Copying: 887/1024 [MB] (17 MBps) [2024-11-03T04:40:50.592Z] Copying: 916/1024 [MB] (29 MBps) [2024-11-03T04:40:51.532Z] Copying: 933/1024 [MB] (17 MBps) [2024-11-03T04:40:52.474Z] Copying: 957/1024 [MB] (23 MBps) [2024-11-03T04:40:53.416Z] Copying: 980/1024 [MB] (23 MBps) [2024-11-03T04:40:54.802Z] Copying: 997/1024 [MB] (16 MBps) [2024-11-03T04:40:55.065Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-03T04:40:55.327Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 04:40:55.141267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.141382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.243 [2024-11-03 04:40:55.141407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:32.243 [2024-11-03 04:40:55.141424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.141463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.243 [2024-11-03 04:40:55.146727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.146777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.243 [2024-11-03 04:40:55.146790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:20:32.243 [2024-11-03 04:40:55.146807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.147047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.147061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.243 [2024-11-03 04:40:55.147070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:32.243 [2024-11-03 04:40:55.147079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.150549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.150583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.243 [2024-11-03 04:40:55.150592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:20:32.243 [2024-11-03 04:40:55.150600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.157683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.157730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.243 [2024-11-03 04:40:55.157742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.060 ms 00:20:32.243 [2024-11-03 04:40:55.157750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.185073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.185139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.243 [2024-11-03 04:40:55.185152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.251 ms 00:20:32.243 [2024-11-03 
04:40:55.185161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.201101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.201150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.243 [2024-11-03 04:40:55.201163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.888 ms 00:20:32.243 [2024-11-03 04:40:55.201172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.201319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.201333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.243 [2024-11-03 04:40:55.201351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:32.243 [2024-11-03 04:40:55.201360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.227442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.227488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.243 [2024-11-03 04:40:55.227499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.066 ms 00:20:32.243 [2024-11-03 04:40:55.227508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.253063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.253121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.243 [2024-11-03 04:40:55.253132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.507 ms 00:20:32.243 [2024-11-03 04:40:55.253140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.277947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.277992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.243 [2024-11-03 04:40:55.278003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:20:32.243 [2024-11-03 04:40:55.278011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.302938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.243 [2024-11-03 04:40:55.302984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.243 [2024-11-03 04:40:55.302996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.851 ms 00:20:32.243 [2024-11-03 04:40:55.303004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.243 [2024-11-03 04:40:55.303050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.243 [2024-11-03 04:40:55.303066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303111] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.243 [2024-11-03 04:40:55.303246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 
04:40:55.303309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:20:32.244 [2024-11-03 04:40:55.303497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.244 [2024-11-03 04:40:55.303876] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.244 [2024-11-03 04:40:55.303883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:20:32.244 [2024-11-03 04:40:55.303895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.244 [2024-11-03 04:40:55.303903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:32.244 [2024-11-03 
04:40:55.303921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.244 [2024-11-03 04:40:55.303930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.244 [2024-11-03 04:40:55.303938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.244 [2024-11-03 04:40:55.303946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.244 [2024-11-03 04:40:55.303967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.244 [2024-11-03 04:40:55.303975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.244 [2024-11-03 04:40:55.303982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.244 [2024-11-03 04:40:55.303990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.244 [2024-11-03 04:40:55.303998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:32.244 [2024-11-03 04:40:55.304008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:20:32.244 [2024-11-03 04:40:55.304015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.245 [2024-11-03 04:40:55.317857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.245 [2024-11-03 04:40:55.317901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.245 [2024-11-03 04:40:55.317912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.822 ms 00:20:32.245 [2024-11-03 04:40:55.317922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.245 [2024-11-03 04:40:55.318327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.245 [2024-11-03 04:40:55.318348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.245 [2024-11-03 04:40:55.318358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:32.245 [2024-11-03 04:40:55.318366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.355163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.355215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.506 [2024-11-03 04:40:55.355227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.355237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.355308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.355318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.506 [2024-11-03 04:40:55.355328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.355337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.355434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.355447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.506 [2024-11-03 04:40:55.355455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.355463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.355480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.355489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.506 [2024-11-03 04:40:55.355496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.355505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.438823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.438876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.506 [2024-11-03 04:40:55.438890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.438899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.506 [2024-11-03 04:40:55.508325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.506 [2024-11-03 04:40:55.508421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.506 [2024-11-03 04:40:55.508506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.506 [2024-11-03 04:40:55.508658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:32.506 [2024-11-03 04:40:55.508735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.508784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.506 [2024-11-03 04:40:55.508814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 
[2024-11-03 04:40:55.508868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.506 [2024-11-03 04:40:55.508930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.506 [2024-11-03 04:40:55.508939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.506 [2024-11-03 04:40:55.508948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.506 [2024-11-03 04:40:55.509084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.795 ms, result 0 00:20:33.493 00:20:33.493 00:20:33.493 04:40:56 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.408 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:35.408 04:40:58 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:35.408 [2024-11-03 04:40:58.423972] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:20:35.408 [2024-11-03 04:40:58.424108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76302 ] 00:20:35.670 [2024-11-03 04:40:58.584026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.670 [2024-11-03 04:40:58.696376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.931 [2024-11-03 04:40:58.979755] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.931 [2024-11-03 04:40:58.979809] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.194 [2024-11-03 04:40:59.139433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.139499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.194 [2024-11-03 04:40:59.139517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.194 [2024-11-03 04:40:59.139526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.139596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.139608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.194 [2024-11-03 04:40:59.139620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:36.194 [2024-11-03 04:40:59.139629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.139652] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.194 [2024-11-03 04:40:59.140821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.194 [2024-11-03 04:40:59.140884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.140895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.194 [2024-11-03 04:40:59.140906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:20:36.194 [2024-11-03 04:40:59.140915] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.142671] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.194 [2024-11-03 04:40:59.157155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.157206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.194 [2024-11-03 04:40:59.157220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:20:36.194 [2024-11-03 04:40:59.157229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.157306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.157319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.194 [2024-11-03 04:40:59.157328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:36.194 [2024-11-03 04:40:59.157336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.165483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.165525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.194 [2024-11-03 04:40:59.165536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.069 ms 00:20:36.194 [2024-11-03 04:40:59.165544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.165651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.165661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.194 [2024-11-03 04:40:59.165670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:36.194 [2024-11-03 04:40:59.165678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.165723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.165733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.194 [2024-11-03 04:40:59.165741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.194 [2024-11-03 04:40:59.165749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.165774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.194 [2024-11-03 04:40:59.169769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.169808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.194 [2024-11-03 04:40:59.169819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:20:36.194 [2024-11-03 04:40:59.169831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.169865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.169874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.194 [2024-11-03 04:40:59.169882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:36.194 [2024-11-03 04:40:59.169891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.169942] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.194 [2024-11-03 04:40:59.169964] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.194 [2024-11-03 04:40:59.170002] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.194 [2024-11-03 04:40:59.170022] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.194 [2024-11-03 04:40:59.170130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.194 [2024-11-03 04:40:59.170142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.194 [2024-11-03 04:40:59.170154] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.194 [2024-11-03 04:40:59.170165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.194 [2024-11-03 04:40:59.170174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.194 [2024-11-03 04:40:59.170183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:36.194 [2024-11-03 04:40:59.170191] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.194 [2024-11-03 04:40:59.170200] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.194 [2024-11-03 04:40:59.170208] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.194 [2024-11-03 04:40:59.170219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.170228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.194 [2024-11-03 04:40:59.170237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:36.194 [2024-11-03 04:40:59.170245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.170328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.194 [2024-11-03 04:40:59.170337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.194 [2024-11-03 04:40:59.170345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:36.194 [2024-11-03 04:40:59.170352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.194 [2024-11-03 04:40:59.170457] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.194 [2024-11-03 04:40:59.170479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.194 [2024-11-03 04:40:59.170488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.194 [2024-11-03 04:40:59.170497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.194 [2024-11-03 04:40:59.170505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.194 [2024-11-03 04:40:59.170512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.194 [2024-11-03 04:40:59.170520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:36.194 [2024-11-03 04:40:59.170526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:20:36.194 [2024-11-03 04:40:59.170534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:36.194 [2024-11-03 04:40:59.170540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.194 [2024-11-03 04:40:59.170547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.194 [2024-11-03 04:40:59.170554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:36.194 [2024-11-03 04:40:59.170575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.194 [2024-11-03 04:40:59.170583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.194 [2024-11-03 04:40:59.170592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:36.194 [2024-11-03 04:40:59.170606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.194 [2024-11-03 04:40:59.170613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.194 [2024-11-03 04:40:59.170620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:36.194 [2024-11-03 04:40:59.170627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.194 [2024-11-03 04:40:59.170634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.195 [2024-11-03 04:40:59.170641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.195 [2024-11-03 04:40:59.170663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.195 [2024-11-03 04:40:59.170684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.195 [2024-11-03 04:40:59.170707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.195 [2024-11-03 04:40:59.170726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.195 [2024-11-03 04:40:59.170739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.195 [2024-11-03 04:40:59.170746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:36.195 [2024-11-03 04:40:59.170753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.195 [2024-11-03 04:40:59.170760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.195 [2024-11-03 04:40:59.170767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:36.195 [2024-11-03 04:40:59.170773] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.195 [2024-11-03 04:40:59.170786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:36.195 [2024-11-03 04:40:59.170792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.195 [2024-11-03 04:40:59.170806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.195 [2024-11-03 04:40:59.170818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.195 [2024-11-03 04:40:59.170834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.195 [2024-11-03 04:40:59.170841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.195 [2024-11-03 04:40:59.170848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.195 [2024-11-03 04:40:59.170856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.195 [2024-11-03 04:40:59.170863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.195 [2024-11-03 04:40:59.170869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.195 [2024-11-03 04:40:59.170878] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.195 [2024-11-03 04:40:59.170887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.170896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:36.195 [2024-11-03 04:40:59.170903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:36.195 [2024-11-03 04:40:59.170910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:36.195 [2024-11-03 04:40:59.170919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:36.195 [2024-11-03 04:40:59.170926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:36.195 [2024-11-03 04:40:59.170933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:36.195 [2024-11-03 04:40:59.170940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:36.195 [2024-11-03 04:40:59.170946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:36.195 [2024-11-03 04:40:59.170953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:36.195 [2024-11-03 04:40:59.170960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.170968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.170976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.170982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.170990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:36.195 [2024-11-03 04:40:59.170997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.195 [2024-11-03 04:40:59.171006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.171017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.195 [2024-11-03 04:40:59.171025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.195 [2024-11-03 04:40:59.171032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.195 [2024-11-03 04:40:59.171039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.195 [2024-11-03 04:40:59.171046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.171054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.195 [2024-11-03 04:40:59.171062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:20:36.195 [2024-11-03 04:40:59.171070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.202905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.202952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.195 [2024-11-03 04:40:59.202963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.789 ms 00:20:36.195 [2024-11-03 04:40:59.202971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.203062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.203076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.195 [2024-11-03 04:40:59.203085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:36.195 [2024-11-03 04:40:59.203093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.247319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.247374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.195 [2024-11-03 04:40:59.247387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.166 ms 00:20:36.195 [2024-11-03 04:40:59.247396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.247445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.247456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.195 [2024-11-03 04:40:59.247466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.195 [2024-11-03 04:40:59.247478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.248103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.248146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.195 [2024-11-03 04:40:59.248157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:20:36.195 [2024-11-03 04:40:59.248165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.248325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.248335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.195 [2024-11-03 04:40:59.248344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:36.195 [2024-11-03 04:40:59.248352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.195 [2024-11-03 04:40:59.264018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.195 [2024-11-03 04:40:59.264063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.195 [2024-11-03 04:40:59.264075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.639 ms 00:20:36.195 [2024-11-03 04:40:59.264086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.278596] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:36.457 [2024-11-03 04:40:59.278643] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.457 [2024-11-03 04:40:59.278657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.278666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.457 [2024-11-03 04:40:59.278676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:20:36.457 [2024-11-03 04:40:59.278683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.304703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.304761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.457 [2024-11-03 04:40:59.304773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.967 ms 00:20:36.457 [2024-11-03 04:40:59.304781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.317534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.317585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.457 [2024-11-03 04:40:59.317597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.692 ms 00:20:36.457 [2024-11-03 04:40:59.317605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.330265] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.330309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.457 [2024-11-03 04:40:59.330320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms 00:20:36.457 [2024-11-03 04:40:59.330327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.330993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.331033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.457 [2024-11-03 04:40:59.331043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:20:36.457 [2024-11-03 04:40:59.331051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.395211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.395279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.457 [2024-11-03 04:40:59.395295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.135 ms 00:20:36.457 [2024-11-03 04:40:59.395311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.406358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:36.457 [2024-11-03 04:40:59.409219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.409265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.457 [2024-11-03 04:40:59.409277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.851 ms 00:20:36.457 [2024-11-03 04:40:59.409286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.409375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.409389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.457 [2024-11-03 04:40:59.409398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:36.457 [2024-11-03 04:40:59.409407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.409483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.409495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.457 [2024-11-03 04:40:59.409503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:36.457 [2024-11-03 04:40:59.409512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.409533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.409542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.457 [2024-11-03 04:40:59.409551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:36.457 [2024-11-03 04:40:59.409577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.409613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.457 [2024-11-03 04:40:59.409627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.409636] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.457 [2024-11-03 04:40:59.409645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:36.457 [2024-11-03 04:40:59.409653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.435647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.435697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.457 [2024-11-03 04:40:59.435711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.976 ms 00:20:36.457 [2024-11-03 04:40:59.435719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.435815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.457 [2024-11-03 04:40:59.435825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.457 [2024-11-03 04:40:59.435835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:36.457 [2024-11-03 04:40:59.435844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.457 [2024-11-03 04:40:59.438038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.086 ms, result 0 00:20:37.402  [2024-11-03T04:41:01.874Z] Copying: 10200/1048576 [kB] (10200 kBps) [2024-11-03T04:41:02.818Z] Copying: 20196/1048576 [kB] (9996 kBps) [2024-11-03T04:41:03.765Z] Copying: 33/1024 [MB] (14 MBps) [2024-11-03T04:41:04.711Z] Copying: 47/1024 [MB] (14 MBps) [2024-11-03T04:41:05.656Z] Copying: 59164/1048576 [kB] (10184 kBps) [2024-11-03T04:41:06.596Z] Copying: 67/1024 [MB] (10 MBps) [2024-11-03T04:41:07.543Z] Copying: 83/1024 [MB] (15 MBps) [2024-11-03T04:41:08.488Z] Copying: 97/1024 [MB] (14 MBps) [2024-11-03T04:41:09.869Z] Copying: 113/1024 [MB] (16 MBps) [2024-11-03T04:41:10.814Z] Copying: 131/1024 [MB] (17 MBps) [2024-11-03T04:41:11.756Z] Copying: 142/1024 [MB] (11 MBps) [2024-11-03T04:41:12.700Z] Copying: 165/1024 [MB] (23 MBps) [2024-11-03T04:41:13.642Z] Copying: 177/1024 [MB] (11 MBps) [2024-11-03T04:41:14.584Z] Copying: 195/1024 [MB] (18 MBps) [2024-11-03T04:41:15.524Z] Copying: 216/1024 [MB] (21 MBps) [2024-11-03T04:41:16.529Z] Copying: 230/1024 [MB] (13 MBps) [2024-11-03T04:41:17.468Z] Copying: 246/1024 [MB] (16 MBps) [2024-11-03T04:41:18.856Z] Copying: 269/1024 [MB] (22 MBps) [2024-11-03T04:41:19.800Z] Copying: 289/1024 [MB] (19 MBps) [2024-11-03T04:41:20.743Z] Copying: 307/1024 [MB] (18 MBps) [2024-11-03T04:41:21.686Z] Copying: 327/1024 [MB] (19 MBps) [2024-11-03T04:41:22.622Z] Copying: 346/1024 [MB] (19 MBps) [2024-11-03T04:41:23.562Z] Copying: 394/1024 [MB] (47 MBps) [2024-11-03T04:41:24.506Z] Copying: 417/1024 [MB] (23 MBps) [2024-11-03T04:41:25.892Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-03T04:41:26.465Z] Copying: 446/1024 [MB] (14 MBps) [2024-11-03T04:41:27.853Z] Copying: 461/1024 [MB] (14 MBps) [2024-11-03T04:41:28.801Z] Copying: 475/1024 [MB] (14 MBps) [2024-11-03T04:41:29.746Z] Copying: 487/1024 [MB] (11 MBps) [2024-11-03T04:41:30.688Z] Copying: 505/1024 [MB] (17 MBps) [2024-11-03T04:41:31.630Z] Copying: 521/1024 [MB] (16 MBps) [2024-11-03T04:41:32.573Z] Copying: 544/1024 [MB] (22 MBps) [2024-11-03T04:41:33.518Z] Copying: 558/1024 [MB] (14 MBps) [2024-11-03T04:41:34.463Z] Copying: 582088/1048576 [kB] (10180 kBps) [2024-11-03T04:41:35.852Z] Copying: 592168/1048576 [kB] (10080 kBps) [2024-11-03T04:41:36.839Z] 
Copying: 588/1024 [MB] (10 MBps) [2024-11-03T04:41:37.785Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-03T04:41:38.729Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-03T04:41:39.676Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-03T04:41:40.620Z] Copying: 636/1024 [MB] (11 MBps) [2024-11-03T04:41:41.560Z] Copying: 647/1024 [MB] (10 MBps) [2024-11-03T04:41:42.504Z] Copying: 666/1024 [MB] (19 MBps) [2024-11-03T04:41:43.893Z] Copying: 683/1024 [MB] (16 MBps) [2024-11-03T04:41:44.464Z] Copying: 700/1024 [MB] (17 MBps) [2024-11-03T04:41:45.852Z] Copying: 714/1024 [MB] (14 MBps) [2024-11-03T04:41:46.795Z] Copying: 737/1024 [MB] (22 MBps) [2024-11-03T04:41:47.740Z] Copying: 756/1024 [MB] (19 MBps) [2024-11-03T04:41:48.685Z] Copying: 772/1024 [MB] (15 MBps) [2024-11-03T04:41:49.627Z] Copying: 786/1024 [MB] (14 MBps) [2024-11-03T04:41:50.571Z] Copying: 803/1024 [MB] (17 MBps) [2024-11-03T04:41:51.515Z] Copying: 824/1024 [MB] (20 MBps) [2024-11-03T04:41:52.460Z] Copying: 840/1024 [MB] (16 MBps) [2024-11-03T04:41:53.848Z] Copying: 855/1024 [MB] (14 MBps) [2024-11-03T04:41:54.790Z] Copying: 869/1024 [MB] (14 MBps) [2024-11-03T04:41:55.735Z] Copying: 891/1024 [MB] (21 MBps) [2024-11-03T04:41:56.677Z] Copying: 905/1024 [MB] (14 MBps) [2024-11-03T04:41:57.656Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-03T04:41:58.601Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-03T04:41:59.546Z] Copying: 938/1024 [MB] (10 MBps) [2024-11-03T04:42:00.488Z] Copying: 951/1024 [MB] (12 MBps) [2024-11-03T04:42:01.873Z] Copying: 968/1024 [MB] (16 MBps) [2024-11-03T04:42:02.810Z] Copying: 996/1024 [MB] (28 MBps) [2024-11-03T04:42:03.751Z] Copying: 1011/1024 [MB] (14 MBps) [2024-11-03T04:42:03.751Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-03T04:42:03.751Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-03 04:42:03.697734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.667 [2024-11-03 04:42:03.697810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:40.667 [2024-11-03 04:42:03.697827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:40.667 [2024-11-03 04:42:03.697836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.667 [2024-11-03 04:42:03.697971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.667 [2024-11-03 04:42:03.701025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.667 [2024-11-03 04:42:03.701065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:40.667 [2024-11-03 04:42:03.701079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:21:40.667 [2024-11-03 04:42:03.701087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.667 [2024-11-03 04:42:03.713295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.667 [2024-11-03 04:42:03.713349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:40.667 [2024-11-03 04:42:03.713360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.848 ms 00:21:40.667 [2024-11-03 04:42:03.713369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.667 [2024-11-03 04:42:03.737369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.667 [2024-11-03 04:42:03.737442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:40.667 [2024-11-03 04:42:03.737455] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.973 ms 00:21:40.667 [2024-11-03 04:42:03.737464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.667 [2024-11-03 04:42:03.743624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.667 [2024-11-03 04:42:03.743667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:40.667 [2024-11-03 04:42:03.743679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:21:40.667 [2024-11-03 04:42:03.743688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.928 [2024-11-03 04:42:03.770277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.928 [2024-11-03 04:42:03.770327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:40.928 [2024-11-03 04:42:03.770339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.539 ms 00:21:40.928 [2024-11-03 04:42:03.770347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.928 [2024-11-03 04:42:03.786193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.928 [2024-11-03 04:42:03.786241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:40.928 [2024-11-03 04:42:03.786262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.800 ms 00:21:40.928 [2024-11-03 04:42:03.786272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.928 [2024-11-03 04:42:03.939913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.928 [2024-11-03 04:42:03.939981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:40.928 [2024-11-03 04:42:03.939993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 153.610 ms 00:21:40.928 [2024-11-03 04:42:03.940002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.928 [2024-11-03 04:42:03.965843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.928 [2024-11-03 04:42:03.965892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:40.928 [2024-11-03 04:42:03.965903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:21:40.928 [2024-11-03 04:42:03.965911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.928 [2024-11-03 04:42:03.991466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.928 [2024-11-03 04:42:03.991527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:40.928 [2024-11-03 04:42:03.991538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.509 ms 00:21:40.928 [2024-11-03 04:42:03.991546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.191 [2024-11-03 04:42:04.016367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.191 [2024-11-03 04:42:04.016416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.191 [2024-11-03 04:42:04.016427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:21:41.191 [2024-11-03 04:42:04.016435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.191 [2024-11-03 04:42:04.041105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.191 [2024-11-03 04:42:04.041152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set 
FTL clean state 00:21:41.191 [2024-11-03 04:42:04.041163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.599 ms 00:21:41.191 [2024-11-03 04:42:04.041171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.191 [2024-11-03 04:42:04.041214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.191 [2024-11-03 04:42:04.041230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97024 / 261120 wr_cnt: 1 state: open 00:21:41.191 [2024-11-03 04:42:04.041241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041414] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 
04:42:04.041623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.191 [2024-11-03 04:42:04.041769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:21:41.192 [2024-11-03 04:42:04.041842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.041993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.192 [2024-11-03 04:42:04.042087] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.192 [2024-11-03 04:42:04.042097] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:21:41.192 [2024-11-03 04:42:04.042105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97024 00:21:41.192 [2024-11-03 04:42:04.042114] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97984 00:21:41.192 [2024-11-03 04:42:04.042121] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97024 00:21:41.192 [2024-11-03 04:42:04.042130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:21:41.192 [2024-11-03 04:42:04.042137] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.192 [2024-11-03 04:42:04.042146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.192 [2024-11-03 04:42:04.042167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.192 [2024-11-03 04:42:04.042175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.192 [2024-11-03 04:42:04.042181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.192 [2024-11-03 04:42:04.042190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.192 [2024-11-03 04:42:04.042198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.192 [2024-11-03 04:42:04.042208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:21:41.192 [2024-11-03 04:42:04.042215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.055789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.192 [2024-11-03 04:42:04.055836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.192 [2024-11-03 04:42:04.055847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.541 ms 00:21:41.192 [2024-11-03 04:42:04.055856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.056247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.192 [2024-11-03 04:42:04.056322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.192 [2024-11-03 04:42:04.056331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:21:41.192 [2024-11-03 04:42:04.056339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.092602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.092664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.192 [2024-11-03 04:42:04.092684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.092694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 
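The stats block above reports 97984 total writes against 97024 user writes; the logged WAF of 1.0099 is consistent with simply dividing the two (write amplification = total device writes / user writes). A quick check of that arithmetic, in Python:

total_writes = 97984      # from the ftl_dev_dump_stats output above
user_writes = 97024       # ditto
print(round(total_writes / user_writes, 4))   # 1.0099, matching the logged WAF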
[2024-11-03 04:42:04.092768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.092779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.192 [2024-11-03 04:42:04.092789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.092799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.092878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.092892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.192 [2024-11-03 04:42:04.092902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.092916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.092934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.092945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.192 [2024-11-03 04:42:04.092954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.092963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.178179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.178237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.192 [2024-11-03 04:42:04.178251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.178266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.247901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.247963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.192 [2024-11-03 04:42:04.247974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.247983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.248048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.192 [2024-11-03 04:42:04.248058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.248066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.248140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.192 [2024-11-03 04:42:04.248149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.248157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.248268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.192 [2024-11-03 04:42:04.248277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.248286] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.248330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.192 [2024-11-03 04:42:04.248339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.248347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.192 [2024-11-03 04:42:04.248401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.192 [2024-11-03 04:42:04.248411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.192 [2024-11-03 04:42:04.248418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.192 [2024-11-03 04:42:04.248469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.193 [2024-11-03 04:42:04.248481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.193 [2024-11-03 04:42:04.248491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.193 [2024-11-03 04:42:04.248499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.193 [2024-11-03 04:42:04.248675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.473 ms, result 0 00:21:42.579 00:21:42.579 00:21:42.579 04:42:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:42.841 [2024-11-03 04:42:05.665725] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
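The restore step above uses spdk_dd to read data back out of the restored FTL bdev (ftl0) into testfile, skipping the first 131072 I/O units and copying 262144 units. Assuming a 4096-byte I/O unit (an assumption, not stated in the log, but consistent with the 1024 MB total shown by the copy progress further down), that is a 1 GiB read starting 512 MiB into the device. A minimal sketch of the arithmetic, in Python:

io_unit = 4096                    # assumed spdk_dd I/O unit size in bytes
skip, count = 131072, 262144      # taken from the spdk_dd command line above
print(skip * io_unit // 2**20)    # 512  -> starting offset in MiB
print(count * io_unit // 2**20)   # 1024 -> amount copied in MiB, matching 'Copying: .../1024 [MB]'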
00:21:42.841 [2024-11-03 04:42:05.665877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76989 ] 00:21:42.841 [2024-11-03 04:42:05.830523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.102 [2024-11-03 04:42:05.951715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.364 [2024-11-03 04:42:06.240274] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.364 [2024-11-03 04:42:06.240357] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.364 [2024-11-03 04:42:06.401047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.401111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.364 [2024-11-03 04:42:06.401130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.364 [2024-11-03 04:42:06.401140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.401195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.401206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.364 [2024-11-03 04:42:06.401217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:43.364 [2024-11-03 04:42:06.401226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.401247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.364 [2024-11-03 04:42:06.402100] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.364 [2024-11-03 04:42:06.402147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.402156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.364 [2024-11-03 04:42:06.402166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:21:43.364 [2024-11-03 04:42:06.402175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.404414] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.364 [2024-11-03 04:42:06.418588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.418643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.364 [2024-11-03 04:42:06.418659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.176 ms 00:21:43.364 [2024-11-03 04:42:06.418668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.418743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.418757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.364 [2024-11-03 04:42:06.418767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:43.364 [2024-11-03 04:42:06.418775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.426731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:43.364 [2024-11-03 04:42:06.426772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.364 [2024-11-03 04:42:06.426783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.875 ms 00:21:43.364 [2024-11-03 04:42:06.426792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.426877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.426887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.364 [2024-11-03 04:42:06.426896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:43.364 [2024-11-03 04:42:06.426905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.426950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.426962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.364 [2024-11-03 04:42:06.426971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:43.364 [2024-11-03 04:42:06.426980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.427006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.364 [2024-11-03 04:42:06.430981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.431021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.364 [2024-11-03 04:42:06.431031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.982 ms 00:21:43.364 [2024-11-03 04:42:06.431043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.431077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.364 [2024-11-03 04:42:06.431086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.364 [2024-11-03 04:42:06.431095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:43.364 [2024-11-03 04:42:06.431103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.364 [2024-11-03 04:42:06.431153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.365 [2024-11-03 04:42:06.431175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.365 [2024-11-03 04:42:06.431214] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.365 [2024-11-03 04:42:06.431233] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.365 [2024-11-03 04:42:06.431340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.365 [2024-11-03 04:42:06.431352] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.365 [2024-11-03 04:42:06.431364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.365 [2024-11-03 04:42:06.431376] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431385] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431396] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:43.365 [2024-11-03 04:42:06.431405] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.365 [2024-11-03 04:42:06.431414] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.365 [2024-11-03 04:42:06.431422] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.365 [2024-11-03 04:42:06.431434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.365 [2024-11-03 04:42:06.431441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.365 [2024-11-03 04:42:06.431450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:21:43.365 [2024-11-03 04:42:06.431457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.365 [2024-11-03 04:42:06.431544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.365 [2024-11-03 04:42:06.431554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.365 [2024-11-03 04:42:06.431587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:43.365 [2024-11-03 04:42:06.431595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.365 [2024-11-03 04:42:06.431701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.365 [2024-11-03 04:42:06.431716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.365 [2024-11-03 04:42:06.431727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.365 [2024-11-03 04:42:06.431753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.365 [2024-11-03 04:42:06.431776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.365 [2024-11-03 04:42:06.431791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.365 [2024-11-03 04:42:06.431801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:43.365 [2024-11-03 04:42:06.431809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.365 [2024-11-03 04:42:06.431816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.365 [2024-11-03 04:42:06.431823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:43.365 [2024-11-03 04:42:06.431836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.365 [2024-11-03 04:42:06.431849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431858] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.365 [2024-11-03 04:42:06.431872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.365 [2024-11-03 04:42:06.431892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.365 [2024-11-03 04:42:06.431915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.365 [2024-11-03 04:42:06.431934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.365 [2024-11-03 04:42:06.431948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.365 [2024-11-03 04:42:06.431954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:43.365 [2024-11-03 04:42:06.431960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.365 [2024-11-03 04:42:06.431967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.365 [2024-11-03 04:42:06.431974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:43.365 [2024-11-03 04:42:06.431981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.365 [2024-11-03 04:42:06.431987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.365 [2024-11-03 04:42:06.431993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:43.365 [2024-11-03 04:42:06.432000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.432006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.365 [2024-11-03 04:42:06.432012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:43.365 [2024-11-03 04:42:06.432018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.432028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.365 [2024-11-03 04:42:06.432037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.365 [2024-11-03 04:42:06.432044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.365 [2024-11-03 04:42:06.432052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.365 [2024-11-03 04:42:06.432061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.365 [2024-11-03 04:42:06.432068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.365 [2024-11-03 04:42:06.432075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.365 
[2024-11-03 04:42:06.432083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.365 [2024-11-03 04:42:06.432090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.365 [2024-11-03 04:42:06.432097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.365 [2024-11-03 04:42:06.432106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.365 [2024-11-03 04:42:06.432116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:43.365 [2024-11-03 04:42:06.432131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:43.365 [2024-11-03 04:42:06.432138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:43.365 [2024-11-03 04:42:06.432147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:43.365 [2024-11-03 04:42:06.432154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:43.365 [2024-11-03 04:42:06.432161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:43.365 [2024-11-03 04:42:06.432168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:43.365 [2024-11-03 04:42:06.432176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:43.365 [2024-11-03 04:42:06.432184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:43.365 [2024-11-03 04:42:06.432191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:43.365 [2024-11-03 04:42:06.432227] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.365 [2024-11-03 04:42:06.432236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.365 [2024-11-03 04:42:06.432253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.365 [2024-11-03 04:42:06.432261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.365 [2024-11-03 04:42:06.432268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.365 [2024-11-03 04:42:06.432279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.365 [2024-11-03 04:42:06.432290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.365 [2024-11-03 04:42:06.432299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:21:43.365 [2024-11-03 04:42:06.432306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.463842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.463893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.627 [2024-11-03 04:42:06.463905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:21:43.627 [2024-11-03 04:42:06.463914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.464004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.464018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.627 [2024-11-03 04:42:06.464026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:43.627 [2024-11-03 04:42:06.464034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.508643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.508726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.627 [2024-11-03 04:42:06.508740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.550 ms 00:21:43.627 [2024-11-03 04:42:06.508750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.508798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.508808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.627 [2024-11-03 04:42:06.508818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.627 [2024-11-03 04:42:06.508830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.509409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.509456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.627 [2024-11-03 04:42:06.509468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:21:43.627 [2024-11-03 04:42:06.509476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.509654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.509668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.627 [2024-11-03 04:42:06.509677] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:21:43.627 [2024-11-03 04:42:06.509685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.525075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.525120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.627 [2024-11-03 04:42:06.525131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.363 ms 00:21:43.627 [2024-11-03 04:42:06.525141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.539351] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:43.627 [2024-11-03 04:42:06.539405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.627 [2024-11-03 04:42:06.539418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.539427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.627 [2024-11-03 04:42:06.539437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:21:43.627 [2024-11-03 04:42:06.539445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.564948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.565006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.627 [2024-11-03 04:42:06.565017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.449 ms 00:21:43.627 [2024-11-03 04:42:06.565026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.577772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.577831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.627 [2024-11-03 04:42:06.577843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.690 ms 00:21:43.627 [2024-11-03 04:42:06.577851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.590012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.590059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.627 [2024-11-03 04:42:06.590071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.115 ms 00:21:43.627 [2024-11-03 04:42:06.590079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.590748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.590783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.627 [2024-11-03 04:42:06.590794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:21:43.627 [2024-11-03 04:42:06.590803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.654411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.654477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.627 [2024-11-03 04:42:06.654493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.584 ms 00:21:43.627 [2024-11-03 04:42:06.654509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.665770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:43.627 [2024-11-03 04:42:06.668604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.668657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.627 [2024-11-03 04:42:06.668670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.018 ms 00:21:43.627 [2024-11-03 04:42:06.668679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.668765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.668777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.627 [2024-11-03 04:42:06.668787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:43.627 [2024-11-03 04:42:06.668796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.670407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.670457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.627 [2024-11-03 04:42:06.670469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:21:43.627 [2024-11-03 04:42:06.670478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.670507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.670517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.627 [2024-11-03 04:42:06.670526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.627 [2024-11-03 04:42:06.670534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.670593] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.627 [2024-11-03 04:42:06.670607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.670616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.627 [2024-11-03 04:42:06.670627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:43.627 [2024-11-03 04:42:06.670635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.696519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.696580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.627 [2024-11-03 04:42:06.696595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.864 ms 00:21:43.627 [2024-11-03 04:42:06.696604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.627 [2024-11-03 04:42:06.696709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.627 [2024-11-03 04:42:06.696721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:43.627 [2024-11-03 04:42:06.696730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:43.627 [2024-11-03 04:42:06.696739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
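The blk_offs/blk_sz fields in the superblock metadata layout dump above are expressed in FTL blocks; assuming the usual 4 KiB FTL block size, they agree with the MiB figures printed by dump_region (for example, the l2p region with blk_sz 0x5000 works out to 80 MiB, matching 'Region l2p ... blocks: 80.00 MiB'). A minimal conversion sketch, in Python:

FTL_BLOCK_SIZE = 4096                          # assumed 4 KiB FTL block size
def blocks_to_mib(blk_sz_hex: str) -> float:
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / 2**20

print(blocks_to_mib("0x5000"))   # 80.0 -> l2p region ('blocks: 80.00 MiB')
print(blocks_to_mib("0x80"))     # 0.5  -> band_md    ('blocks: 0.50 MiB')
print(blocks_to_mib("0x800"))    # 8.0  -> p2l0..p2l3 ('blocks: 8.00 MiB')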
00:21:43.627 [2024-11-03 04:42:06.698012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.437 ms, result 0 00:21:45.015  [2024-11-03T04:42:09.043Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-03T04:42:09.981Z] Copying: 35/1024 [MB] (22 MBps) [2024-11-03T04:42:10.924Z] Copying: 56/1024 [MB] (20 MBps) [2024-11-03T04:42:12.309Z] Copying: 78/1024 [MB] (21 MBps) [2024-11-03T04:42:13.257Z] Copying: 98/1024 [MB] (20 MBps) [2024-11-03T04:42:14.203Z] Copying: 119/1024 [MB] (20 MBps) [2024-11-03T04:42:15.148Z] Copying: 138/1024 [MB] (18 MBps) [2024-11-03T04:42:16.119Z] Copying: 149/1024 [MB] (11 MBps) [2024-11-03T04:42:17.065Z] Copying: 161/1024 [MB] (11 MBps) [2024-11-03T04:42:18.012Z] Copying: 171/1024 [MB] (10 MBps) [2024-11-03T04:42:18.958Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-03T04:42:19.896Z] Copying: 192/1024 [MB] (10 MBps) [2024-11-03T04:42:21.290Z] Copying: 210/1024 [MB] (17 MBps) [2024-11-03T04:42:22.235Z] Copying: 220/1024 [MB] (10 MBps) [2024-11-03T04:42:23.178Z] Copying: 231/1024 [MB] (10 MBps) [2024-11-03T04:42:24.123Z] Copying: 241/1024 [MB] (10 MBps) [2024-11-03T04:42:25.069Z] Copying: 252/1024 [MB] (10 MBps) [2024-11-03T04:42:26.015Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-03T04:42:26.959Z] Copying: 273/1024 [MB] (10 MBps) [2024-11-03T04:42:27.902Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-03T04:42:29.290Z] Copying: 294/1024 [MB] (10 MBps) [2024-11-03T04:42:30.231Z] Copying: 309/1024 [MB] (14 MBps) [2024-11-03T04:42:31.175Z] Copying: 321/1024 [MB] (12 MBps) [2024-11-03T04:42:32.120Z] Copying: 332/1024 [MB] (11 MBps) [2024-11-03T04:42:33.065Z] Copying: 346/1024 [MB] (14 MBps) [2024-11-03T04:42:34.009Z] Copying: 360/1024 [MB] (13 MBps) [2024-11-03T04:42:34.954Z] Copying: 376/1024 [MB] (15 MBps) [2024-11-03T04:42:35.899Z] Copying: 395/1024 [MB] (18 MBps) [2024-11-03T04:42:37.314Z] Copying: 407/1024 [MB] (12 MBps) [2024-11-03T04:42:38.259Z] Copying: 422/1024 [MB] (15 MBps) [2024-11-03T04:42:39.204Z] Copying: 439/1024 [MB] (16 MBps) [2024-11-03T04:42:40.151Z] Copying: 449/1024 [MB] (10 MBps) [2024-11-03T04:42:41.096Z] Copying: 464/1024 [MB] (14 MBps) [2024-11-03T04:42:42.041Z] Copying: 474/1024 [MB] (10 MBps) [2024-11-03T04:42:42.980Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-03T04:42:43.920Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-03T04:42:45.312Z] Copying: 519/1024 [MB] (23 MBps) [2024-11-03T04:42:46.255Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-03T04:42:47.198Z] Copying: 543/1024 [MB] (14 MBps) [2024-11-03T04:42:48.141Z] Copying: 559/1024 [MB] (15 MBps) [2024-11-03T04:42:49.087Z] Copying: 572/1024 [MB] (13 MBps) [2024-11-03T04:42:50.028Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-03T04:42:50.971Z] Copying: 596/1024 [MB] (13 MBps) [2024-11-03T04:42:51.915Z] Copying: 613/1024 [MB] (17 MBps) [2024-11-03T04:42:53.303Z] Copying: 627/1024 [MB] (13 MBps) [2024-11-03T04:42:54.246Z] Copying: 637/1024 [MB] (10 MBps) [2024-11-03T04:42:55.188Z] Copying: 649/1024 [MB] (11 MBps) [2024-11-03T04:42:56.137Z] Copying: 669/1024 [MB] (20 MBps) [2024-11-03T04:42:57.151Z] Copying: 683/1024 [MB] (13 MBps) [2024-11-03T04:42:58.096Z] Copying: 703/1024 [MB] (20 MBps) [2024-11-03T04:42:59.040Z] Copying: 721/1024 [MB] (17 MBps) [2024-11-03T04:42:59.982Z] Copying: 733/1024 [MB] (12 MBps) [2024-11-03T04:43:00.926Z] Copying: 747/1024 [MB] (14 MBps) [2024-11-03T04:43:02.310Z] Copying: 765/1024 [MB] (18 MBps) [2024-11-03T04:43:03.251Z] Copying: 776/1024 [MB] (11 MBps) [2024-11-03T04:43:04.192Z] Copying: 787/1024 [MB] (10 MBps) 
[2024-11-03T04:43:05.133Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-03T04:43:06.073Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-03T04:43:07.011Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-03T04:43:07.955Z] Copying: 836/1024 [MB] (17 MBps) [2024-11-03T04:43:08.896Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-03T04:43:10.279Z] Copying: 863/1024 [MB] (16 MBps) [2024-11-03T04:43:11.222Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-03T04:43:12.164Z] Copying: 894/1024 [MB] (19 MBps) [2024-11-03T04:43:13.107Z] Copying: 914/1024 [MB] (20 MBps) [2024-11-03T04:43:14.048Z] Copying: 927/1024 [MB] (12 MBps) [2024-11-03T04:43:14.987Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-03T04:43:15.927Z] Copying: 959/1024 [MB] (21 MBps) [2024-11-03T04:43:16.916Z] Copying: 976/1024 [MB] (17 MBps) [2024-11-03T04:43:18.307Z] Copying: 990/1024 [MB] (13 MBps) [2024-11-03T04:43:18.880Z] Copying: 1004/1024 [MB] (14 MBps) [2024-11-03T04:43:18.880Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 04:43:18.820309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.820390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.796 [2024-11-03 04:43:18.820407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:55.796 [2024-11-03 04:43:18.820417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.796 [2024-11-03 04:43:18.820447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.796 [2024-11-03 04:43:18.823520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.823588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.796 [2024-11-03 04:43:18.823601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:22:55.796 [2024-11-03 04:43:18.823612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.796 [2024-11-03 04:43:18.823857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.823870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.796 [2024-11-03 04:43:18.823880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:22:55.796 [2024-11-03 04:43:18.823889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.796 [2024-11-03 04:43:18.830767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.830825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.796 [2024-11-03 04:43:18.830836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:22:55.796 [2024-11-03 04:43:18.830845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.796 [2024-11-03 04:43:18.837049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.837095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.796 [2024-11-03 04:43:18.837106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:22:55.796 [2024-11-03 04:43:18.837115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.796 [2024-11-03 04:43:18.864430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.796 [2024-11-03 04:43:18.864478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:22:55.796 [2024-11-03 04:43:18.864491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.256 ms 00:22:55.796 [2024-11-03 04:43:18.864500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.058 [2024-11-03 04:43:18.881138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.058 [2024-11-03 04:43:18.881189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:56.058 [2024-11-03 04:43:18.881211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.588 ms 00:22:56.058 [2024-11-03 04:43:18.881220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.058 [2024-11-03 04:43:19.102652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.058 [2024-11-03 04:43:19.102729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:56.058 [2024-11-03 04:43:19.102742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 221.377 ms 00:22:56.058 [2024-11-03 04:43:19.102751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.058 [2024-11-03 04:43:19.129470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.058 [2024-11-03 04:43:19.129522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:56.058 [2024-11-03 04:43:19.129535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.702 ms 00:22:56.058 [2024-11-03 04:43:19.129543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.320 [2024-11-03 04:43:19.155211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.320 [2024-11-03 04:43:19.155259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:56.320 [2024-11-03 04:43:19.155285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.610 ms 00:22:56.320 [2024-11-03 04:43:19.155292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.320 [2024-11-03 04:43:19.180436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.320 [2024-11-03 04:43:19.180482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:56.320 [2024-11-03 04:43:19.180493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.096 ms 00:22:56.320 [2024-11-03 04:43:19.180501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.320 [2024-11-03 04:43:19.205212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.320 [2024-11-03 04:43:19.205258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:56.320 [2024-11-03 04:43:19.205271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.612 ms 00:22:56.320 [2024-11-03 04:43:19.205279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.320 [2024-11-03 04:43:19.205325] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:56.320 [2024-11-03 04:43:19.205342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:22:56.320 [2024-11-03 04:43:19.205354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:56.320 [2024-11-03 04:43:19.205363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:56.320 [2024-11-03 
04:43:19.205373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:22:56.321 [2024-11-03 04:43:19.205607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.205999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:56.321 [2024-11-03 04:43:19.206145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:56.322 [2024-11-03 04:43:19.206220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:56.322 [2024-11-03 04:43:19.206228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73eb68bc-cabf-4fe5-8508-e3167e0524d2 00:22:56.322 [2024-11-03 04:43:19.206237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 131840 00:22:56.322 [2024-11-03 04:43:19.206247] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35776 00:22:56.322 [2024-11-03 04:43:19.206255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34816 00:22:56.322 [2024-11-03 04:43:19.206264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0276 00:22:56.322 [2024-11-03 04:43:19.206271] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:56.322 [2024-11-03 04:43:19.206279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:56.322 [2024-11-03 04:43:19.206289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:56.322 [2024-11-03 04:43:19.206303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:56.322 [2024-11-03 04:43:19.206310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:56.322 [2024-11-03 04:43:19.206318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.322 [2024-11-03 04:43:19.206326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:56.322 [2024-11-03 04:43:19.206334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:22:56.322 [2024-11-03 04:43:19.206342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.220125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.322 [2024-11-03 04:43:19.220171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:56.322 [2024-11-03 04:43:19.220183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.764 ms 00:22:56.322 [2024-11-03 04:43:19.220192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.220633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.322 [2024-11-03 04:43:19.220655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:56.322 [2024-11-03 04:43:19.220666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:22:56.322 [2024-11-03 04:43:19.220674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.257264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.322 [2024-11-03 04:43:19.257317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.322 [2024-11-03 04:43:19.257336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.322 [2024-11-03 04:43:19.257346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.257410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.322 [2024-11-03 04:43:19.257421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.322 [2024-11-03 04:43:19.257430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.322 [2024-11-03 04:43:19.257440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.257511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.322 [2024-11-03 04:43:19.257524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.322 [2024-11-03 04:43:19.257535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.322 [2024-11-03 
04:43:19.257549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.257585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.322 [2024-11-03 04:43:19.257595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.322 [2024-11-03 04:43:19.257606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.322 [2024-11-03 04:43:19.257614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.322 [2024-11-03 04:43:19.343360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.322 [2024-11-03 04:43:19.343417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.322 [2024-11-03 04:43:19.343431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.322 [2024-11-03 04:43:19.343445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.583 [2024-11-03 04:43:19.413279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.583 [2024-11-03 04:43:19.413335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.583 [2024-11-03 04:43:19.413348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.584 [2024-11-03 04:43:19.413450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.584 [2024-11-03 04:43:19.413526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.584 [2024-11-03 04:43:19.413681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:56.584 [2024-11-03 04:43:19.413745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.584 [2024-11-03 04:43:19.413812] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.413869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.584 [2024-11-03 04:43:19.413880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.584 [2024-11-03 04:43:19.413889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.584 [2024-11-03 04:43:19.413897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.584 [2024-11-03 04:43:19.414029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.688 ms, result 0 00:22:57.155 00:22:57.155 00:22:57.155 04:43:20 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:59.701 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74589 00:22:59.701 04:43:22 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74589 ']' 00:22:59.701 04:43:22 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74589 00:22:59.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74589) - No such process 00:22:59.701 Process with pid 74589 is not found 00:22:59.701 04:43:22 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74589 is not found' 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:59.701 Remove shared memory files 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:59.701 04:43:22 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:59.701 ************************************ 00:22:59.701 00:22:59.701 real 5m9.223s 00:22:59.701 user 4m56.215s 00:22:59.701 sys 0m12.379s 00:22:59.701 04:43:22 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.701 04:43:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:59.701 END TEST ftl_restore 00:22:59.701 ************************************ 00:22:59.702 04:43:22 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:59.702 04:43:22 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:59.702 04:43:22 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.702 04:43:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:59.702 ************************************ 00:22:59.702 START 
TEST ftl_dirty_shutdown 00:22:59.702 ************************************ 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:59.702 * Looking for test storage... 00:22:59.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.702 --rc genhtml_branch_coverage=1 00:22:59.702 --rc genhtml_function_coverage=1 00:22:59.702 --rc genhtml_legend=1 00:22:59.702 --rc geninfo_all_blocks=1 00:22:59.702 --rc geninfo_unexecuted_blocks=1 00:22:59.702 00:22:59.702 ' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.702 --rc genhtml_branch_coverage=1 00:22:59.702 --rc genhtml_function_coverage=1 00:22:59.702 --rc genhtml_legend=1 00:22:59.702 --rc geninfo_all_blocks=1 00:22:59.702 --rc geninfo_unexecuted_blocks=1 00:22:59.702 00:22:59.702 ' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.702 --rc genhtml_branch_coverage=1 00:22:59.702 --rc genhtml_function_coverage=1 00:22:59.702 --rc genhtml_legend=1 00:22:59.702 --rc geninfo_all_blocks=1 00:22:59.702 --rc geninfo_unexecuted_blocks=1 00:22:59.702 00:22:59.702 ' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.702 --rc genhtml_branch_coverage=1 00:22:59.702 --rc genhtml_function_coverage=1 00:22:59.702 --rc genhtml_legend=1 00:22:59.702 --rc geninfo_all_blocks=1 00:22:59.702 --rc geninfo_unexecuted_blocks=1 00:22:59.702 00:22:59.702 ' 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:59.702 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:59.962 04:43:22 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77841 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77841 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77841 ']' 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.962 04:43:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:59.962 [2024-11-03 04:43:22.887296] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
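The xtrace above shows dirty_shutdown.sh finishing its option parsing (-c selects the NV cache bdf 0000:00:10.0, the remaining argument is the base device 0000:00:11.0, plus the timeout, block_size, chunk_size and data_size defaults), arming the restore_kill trap, and then starting spdk_tgt -m 0x1 while waitforlisten blocks until the target's RPC socket is usable. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a liveness probe (an illustration only, not the upstream waitforlisten helper from autotest_common.sh):
    # Start the SPDK target on core 0 (mask 0x1) and remember its pid for cleanup.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll the RPC socket until the target answers; only then issue bdev/FTL RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
Once the socket responds, the test continues with bdev_nvme_attach_controller and the lvol/FTL setup traced below.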
00:22:59.962 [2024-11-03 04:43:22.887442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77841 ] 00:23:00.224 [2024-11-03 04:43:23.054766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.224 [2024-11-03 04:43:23.173987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.166 04:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.166 04:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:01.167 04:43:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:01.167 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:01.428 { 00:23:01.428 "name": "nvme0n1", 00:23:01.428 "aliases": [ 00:23:01.428 "27098a7f-8fa2-4b58-b22c-35df9891a31d" 00:23:01.428 ], 00:23:01.428 "product_name": "NVMe disk", 00:23:01.428 "block_size": 4096, 00:23:01.428 "num_blocks": 1310720, 00:23:01.428 "uuid": "27098a7f-8fa2-4b58-b22c-35df9891a31d", 00:23:01.428 "numa_id": -1, 00:23:01.428 "assigned_rate_limits": { 00:23:01.428 "rw_ios_per_sec": 0, 00:23:01.428 "rw_mbytes_per_sec": 0, 00:23:01.428 "r_mbytes_per_sec": 0, 00:23:01.428 "w_mbytes_per_sec": 0 00:23:01.428 }, 00:23:01.428 "claimed": true, 00:23:01.428 "claim_type": "read_many_write_one", 00:23:01.428 "zoned": false, 00:23:01.428 "supported_io_types": { 00:23:01.428 "read": true, 00:23:01.428 "write": true, 00:23:01.428 "unmap": true, 00:23:01.428 "flush": true, 00:23:01.428 "reset": true, 00:23:01.428 "nvme_admin": true, 00:23:01.428 "nvme_io": true, 00:23:01.428 "nvme_io_md": false, 00:23:01.428 "write_zeroes": true, 00:23:01.428 "zcopy": false, 00:23:01.428 "get_zone_info": false, 00:23:01.428 "zone_management": false, 00:23:01.428 "zone_append": false, 00:23:01.428 "compare": true, 00:23:01.428 "compare_and_write": false, 00:23:01.428 "abort": true, 00:23:01.428 "seek_hole": false, 00:23:01.428 "seek_data": false, 00:23:01.428 
"copy": true, 00:23:01.428 "nvme_iov_md": false 00:23:01.428 }, 00:23:01.428 "driver_specific": { 00:23:01.428 "nvme": [ 00:23:01.428 { 00:23:01.428 "pci_address": "0000:00:11.0", 00:23:01.428 "trid": { 00:23:01.428 "trtype": "PCIe", 00:23:01.428 "traddr": "0000:00:11.0" 00:23:01.428 }, 00:23:01.428 "ctrlr_data": { 00:23:01.428 "cntlid": 0, 00:23:01.428 "vendor_id": "0x1b36", 00:23:01.428 "model_number": "QEMU NVMe Ctrl", 00:23:01.428 "serial_number": "12341", 00:23:01.428 "firmware_revision": "8.0.0", 00:23:01.428 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:01.428 "oacs": { 00:23:01.428 "security": 0, 00:23:01.428 "format": 1, 00:23:01.428 "firmware": 0, 00:23:01.428 "ns_manage": 1 00:23:01.428 }, 00:23:01.428 "multi_ctrlr": false, 00:23:01.428 "ana_reporting": false 00:23:01.428 }, 00:23:01.428 "vs": { 00:23:01.428 "nvme_version": "1.4" 00:23:01.428 }, 00:23:01.428 "ns_data": { 00:23:01.428 "id": 1, 00:23:01.428 "can_share": false 00:23:01.428 } 00:23:01.428 } 00:23:01.428 ], 00:23:01.428 "mp_policy": "active_passive" 00:23:01.428 } 00:23:01.428 } 00:23:01.428 ]' 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:01.428 04:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:01.429 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:01.429 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:01.429 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:01.429 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:01.429 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:01.690 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=60d4ea42-5c3b-4cd8-a828-a08d077aa7f9 00:23:01.690 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:01.690 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60d4ea42-5c3b-4cd8-a828-a08d077aa7f9 00:23:01.952 04:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:02.212 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9683a622-9dca-4514-8196-df193d14c71f 00:23:02.212 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9683a622-9dca-4514-8196-df193d14c71f 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:02.474 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:02.733 { 00:23:02.733 "name": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:02.733 "aliases": [ 00:23:02.733 "lvs/nvme0n1p0" 00:23:02.733 ], 00:23:02.733 "product_name": "Logical Volume", 00:23:02.733 "block_size": 4096, 00:23:02.733 "num_blocks": 26476544, 00:23:02.733 "uuid": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:02.733 "assigned_rate_limits": { 00:23:02.733 "rw_ios_per_sec": 0, 00:23:02.733 "rw_mbytes_per_sec": 0, 00:23:02.733 "r_mbytes_per_sec": 0, 00:23:02.733 "w_mbytes_per_sec": 0 00:23:02.733 }, 00:23:02.733 "claimed": false, 00:23:02.733 "zoned": false, 00:23:02.733 "supported_io_types": { 00:23:02.733 "read": true, 00:23:02.733 "write": true, 00:23:02.733 "unmap": true, 00:23:02.733 "flush": false, 00:23:02.733 "reset": true, 00:23:02.733 "nvme_admin": false, 00:23:02.733 "nvme_io": false, 00:23:02.733 "nvme_io_md": false, 00:23:02.733 "write_zeroes": true, 00:23:02.733 "zcopy": false, 00:23:02.733 "get_zone_info": false, 00:23:02.733 "zone_management": false, 00:23:02.733 "zone_append": false, 00:23:02.733 "compare": false, 00:23:02.733 "compare_and_write": false, 00:23:02.733 "abort": false, 00:23:02.733 "seek_hole": true, 00:23:02.733 "seek_data": true, 00:23:02.733 "copy": false, 00:23:02.733 "nvme_iov_md": false 00:23:02.733 }, 00:23:02.733 "driver_specific": { 00:23:02.733 "lvol": { 00:23:02.733 "lvol_store_uuid": "9683a622-9dca-4514-8196-df193d14c71f", 00:23:02.733 "base_bdev": "nvme0n1", 00:23:02.733 "thin_provision": true, 00:23:02.733 "num_allocated_clusters": 0, 00:23:02.733 "snapshot": false, 00:23:02.733 "clone": false, 00:23:02.733 "esnap_clone": false 00:23:02.733 } 00:23:02.733 } 00:23:02.733 } 00:23:02.733 ]' 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:02.733 04:43:25 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:02.993 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:02.993 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:02.993 04:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.993 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7396877a-6dc0-42be-936b-0453b1ba6989 00:23:02.993 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:02.994 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:02.994 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:02.994 04:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:03.255 { 00:23:03.255 "name": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:03.255 "aliases": [ 00:23:03.255 "lvs/nvme0n1p0" 00:23:03.255 ], 00:23:03.255 "product_name": "Logical Volume", 00:23:03.255 "block_size": 4096, 00:23:03.255 "num_blocks": 26476544, 00:23:03.255 "uuid": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:03.255 "assigned_rate_limits": { 00:23:03.255 "rw_ios_per_sec": 0, 00:23:03.255 "rw_mbytes_per_sec": 0, 00:23:03.255 "r_mbytes_per_sec": 0, 00:23:03.255 "w_mbytes_per_sec": 0 00:23:03.255 }, 00:23:03.255 "claimed": false, 00:23:03.255 "zoned": false, 00:23:03.255 "supported_io_types": { 00:23:03.255 "read": true, 00:23:03.255 "write": true, 00:23:03.255 "unmap": true, 00:23:03.255 "flush": false, 00:23:03.255 "reset": true, 00:23:03.255 "nvme_admin": false, 00:23:03.255 "nvme_io": false, 00:23:03.255 "nvme_io_md": false, 00:23:03.255 "write_zeroes": true, 00:23:03.255 "zcopy": false, 00:23:03.255 "get_zone_info": false, 00:23:03.255 "zone_management": false, 00:23:03.255 "zone_append": false, 00:23:03.255 "compare": false, 00:23:03.255 "compare_and_write": false, 00:23:03.255 "abort": false, 00:23:03.255 "seek_hole": true, 00:23:03.255 "seek_data": true, 00:23:03.255 "copy": false, 00:23:03.255 "nvme_iov_md": false 00:23:03.255 }, 00:23:03.255 "driver_specific": { 00:23:03.255 "lvol": { 00:23:03.255 "lvol_store_uuid": "9683a622-9dca-4514-8196-df193d14c71f", 00:23:03.255 "base_bdev": "nvme0n1", 00:23:03.255 "thin_provision": true, 00:23:03.255 "num_allocated_clusters": 0, 00:23:03.255 "snapshot": false, 00:23:03.255 "clone": false, 00:23:03.255 "esnap_clone": false 00:23:03.255 } 00:23:03.255 } 00:23:03.255 } 00:23:03.255 ]' 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:03.255 04:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7396877a-6dc0-42be-936b-0453b1ba6989 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:03.515 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7396877a-6dc0-42be-936b-0453b1ba6989 00:23:03.775 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:03.775 { 00:23:03.775 "name": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:03.775 "aliases": [ 00:23:03.775 "lvs/nvme0n1p0" 00:23:03.775 ], 00:23:03.775 "product_name": "Logical Volume", 00:23:03.775 "block_size": 4096, 00:23:03.775 "num_blocks": 26476544, 00:23:03.775 "uuid": "7396877a-6dc0-42be-936b-0453b1ba6989", 00:23:03.775 "assigned_rate_limits": { 00:23:03.775 "rw_ios_per_sec": 0, 00:23:03.775 "rw_mbytes_per_sec": 0, 00:23:03.775 "r_mbytes_per_sec": 0, 00:23:03.775 "w_mbytes_per_sec": 0 00:23:03.775 }, 00:23:03.775 "claimed": false, 00:23:03.775 "zoned": false, 00:23:03.775 "supported_io_types": { 00:23:03.775 "read": true, 00:23:03.775 "write": true, 00:23:03.775 "unmap": true, 00:23:03.775 "flush": false, 00:23:03.775 "reset": true, 00:23:03.775 "nvme_admin": false, 00:23:03.775 "nvme_io": false, 00:23:03.775 "nvme_io_md": false, 00:23:03.775 "write_zeroes": true, 00:23:03.775 "zcopy": false, 00:23:03.775 "get_zone_info": false, 00:23:03.775 "zone_management": false, 00:23:03.775 "zone_append": false, 00:23:03.775 "compare": false, 00:23:03.775 "compare_and_write": false, 00:23:03.775 "abort": false, 00:23:03.775 "seek_hole": true, 00:23:03.775 "seek_data": true, 00:23:03.775 "copy": false, 00:23:03.775 "nvme_iov_md": false 00:23:03.775 }, 00:23:03.775 "driver_specific": { 00:23:03.775 "lvol": { 00:23:03.776 "lvol_store_uuid": "9683a622-9dca-4514-8196-df193d14c71f", 00:23:03.776 "base_bdev": "nvme0n1", 00:23:03.776 "thin_provision": true, 00:23:03.776 "num_allocated_clusters": 0, 00:23:03.776 "snapshot": false, 00:23:03.776 "clone": false, 00:23:03.776 "esnap_clone": false 00:23:03.776 } 00:23:03.776 } 00:23:03.776 } 00:23:03.776 ]' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7396877a-6dc0-42be-936b-0453b1ba6989 
--l2p_dram_limit 10' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:03.776 04:43:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7396877a-6dc0-42be-936b-0453b1ba6989 --l2p_dram_limit 10 -c nvc0n1p0 00:23:04.037 [2024-11-03 04:43:26.875695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.037 [2024-11-03 04:43:26.875756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:04.037 [2024-11-03 04:43:26.875776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.037 [2024-11-03 04:43:26.875785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.037 [2024-11-03 04:43:26.875851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.037 [2024-11-03 04:43:26.875862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.037 [2024-11-03 04:43:26.875873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:04.037 [2024-11-03 04:43:26.875881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.037 [2024-11-03 04:43:26.875910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:04.037 [2024-11-03 04:43:26.876693] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:04.037 [2024-11-03 04:43:26.876721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.037 [2024-11-03 04:43:26.876730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.037 [2024-11-03 04:43:26.876743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:23:04.037 [2024-11-03 04:43:26.876751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.037 [2024-11-03 04:43:26.876832] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 91672ceb-899a-43cc-a534-26de9442945d 00:23:04.037 [2024-11-03 04:43:26.878505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.037 [2024-11-03 04:43:26.878709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:04.038 [2024-11-03 04:43:26.878729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:04.038 [2024-11-03 04:43:26.878744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.887094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.887140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.038 [2024-11-03 04:43:26.887151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.290 ms 00:23:04.038 [2024-11-03 04:43:26.887164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.887264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.887276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.038 [2024-11-03 04:43:26.887285] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:04.038 [2024-11-03 04:43:26.887299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.887351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.887365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.038 [2024-11-03 04:43:26.887373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:04.038 [2024-11-03 04:43:26.887383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.887408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.038 [2024-11-03 04:43:26.891819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.891859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.038 [2024-11-03 04:43:26.891872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.418 ms 00:23:04.038 [2024-11-03 04:43:26.891885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.891925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.891933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.038 [2024-11-03 04:43:26.891943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:04.038 [2024-11-03 04:43:26.891951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.891996] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:04.038 [2024-11-03 04:43:26.892141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.038 [2024-11-03 04:43:26.892159] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.038 [2024-11-03 04:43:26.892170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:04.038 [2024-11-03 04:43:26.892183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892203] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:04.038 [2024-11-03 04:43:26.892211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.038 [2024-11-03 04:43:26.892221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.038 [2024-11-03 04:43:26.892228] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.038 [2024-11-03 04:43:26.892241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.892249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.038 [2024-11-03 04:43:26.892259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:23:04.038 [2024-11-03 04:43:26.892276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.892364] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.038 [2024-11-03 04:43:26.892373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.038 [2024-11-03 04:43:26.892384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:04.038 [2024-11-03 04:43:26.892391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.038 [2024-11-03 04:43:26.892490] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.038 [2024-11-03 04:43:26.892502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.038 [2024-11-03 04:43:26.892513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.038 [2024-11-03 04:43:26.892539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.038 [2024-11-03 04:43:26.892592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.038 [2024-11-03 04:43:26.892608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.038 [2024-11-03 04:43:26.892628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:04.038 [2024-11-03 04:43:26.892637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.038 [2024-11-03 04:43:26.892644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.038 [2024-11-03 04:43:26.892653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:04.038 [2024-11-03 04:43:26.892660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.038 [2024-11-03 04:43:26.892680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.038 [2024-11-03 04:43:26.892710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.038 [2024-11-03 04:43:26.892732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.038 [2024-11-03 04:43:26.892758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892774] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.038 [2024-11-03 04:43:26.892781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.038 [2024-11-03 04:43:26.892808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.038 [2024-11-03 04:43:26.892824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.038 [2024-11-03 04:43:26.892830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:04.038 [2024-11-03 04:43:26.892839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.038 [2024-11-03 04:43:26.892846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.038 [2024-11-03 04:43:26.892854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:04.038 [2024-11-03 04:43:26.892861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.038 [2024-11-03 04:43:26.892877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:04.038 [2024-11-03 04:43:26.892886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892892] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.038 [2024-11-03 04:43:26.892902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.038 [2024-11-03 04:43:26.892910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.038 [2024-11-03 04:43:26.892928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.038 [2024-11-03 04:43:26.892939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.038 [2024-11-03 04:43:26.892946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.038 [2024-11-03 04:43:26.892955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.038 [2024-11-03 04:43:26.892963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.038 [2024-11-03 04:43:26.892973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.038 [2024-11-03 04:43:26.892984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.038 [2024-11-03 04:43:26.892995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:04.039 [2024-11-03 04:43:26.893013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:04.039 [2024-11-03 04:43:26.893021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:04.039 [2024-11-03 04:43:26.893031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:04.039 [2024-11-03 04:43:26.893038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:04.039 [2024-11-03 04:43:26.893046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:04.039 [2024-11-03 04:43:26.893054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:04.039 [2024-11-03 04:43:26.893064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:04.039 [2024-11-03 04:43:26.893071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:04.039 [2024-11-03 04:43:26.893082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:04.039 [2024-11-03 04:43:26.893123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.039 [2024-11-03 04:43:26.893134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.039 [2024-11-03 04:43:26.893154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.039 [2024-11-03 04:43:26.893161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.039 [2024-11-03 04:43:26.893172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.039 [2024-11-03 04:43:26.893180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.039 [2024-11-03 04:43:26.893189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.039 [2024-11-03 04:43:26.893197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:23:04.039 [2024-11-03 04:43:26.893206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.039 [2024-11-03 04:43:26.893246] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:04.039 [2024-11-03 04:43:26.893259] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:08.244 [2024-11-03 04:43:30.795288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.795614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:08.244 [2024-11-03 04:43:30.795785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3902.026 ms 00:23:08.244 [2024-11-03 04:43:30.795817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.828387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.828662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.244 [2024-11-03 04:43:30.828795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.282 ms 00:23:08.244 [2024-11-03 04:43:30.828832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.828994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.829223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:08.244 [2024-11-03 04:43:30.829251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:08.244 [2024-11-03 04:43:30.829277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.864803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.865010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.244 [2024-11-03 04:43:30.865144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.461 ms 00:23:08.244 [2024-11-03 04:43:30.865175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.865228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.865257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.244 [2024-11-03 04:43:30.865278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:08.244 [2024-11-03 04:43:30.865303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.865933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.866103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.244 [2024-11-03 04:43:30.866168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:23:08.244 [2024-11-03 04:43:30.866196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.866329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.866355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.244 [2024-11-03 04:43:30.866376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:08.244 [2024-11-03 04:43:30.866401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.884082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.884263] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.244 [2024-11-03 04:43:30.884334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:23:08.244 [2024-11-03 04:43:30.884364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:30.897694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:08.244 [2024-11-03 04:43:30.901514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:30.901674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:08.244 [2024-11-03 04:43:30.901744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.041 ms 00:23:08.244 [2024-11-03 04:43:30.901778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.009713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.009971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:08.244 [2024-11-03 04:43:31.010053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.883 ms 00:23:08.244 [2024-11-03 04:43:31.010080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.010301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.010389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:08.244 [2024-11-03 04:43:31.010423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:23:08.244 [2024-11-03 04:43:31.010448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.036470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.036676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:08.244 [2024-11-03 04:43:31.036706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.915 ms 00:23:08.244 [2024-11-03 04:43:31.036717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.061695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.061743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:08.244 [2024-11-03 04:43:31.061759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.922 ms 00:23:08.244 [2024-11-03 04:43:31.061766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.062391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.062410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:08.244 [2024-11-03 04:43:31.062423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:23:08.244 [2024-11-03 04:43:31.062431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.145341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.145396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:08.244 [2024-11-03 04:43:31.145418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.861 ms 00:23:08.244 [2024-11-03 04:43:31.145427] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.173210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.173260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:08.244 [2024-11-03 04:43:31.173279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.703 ms 00:23:08.244 [2024-11-03 04:43:31.173288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.199325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.199505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:08.244 [2024-11-03 04:43:31.199531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.005 ms 00:23:08.244 [2024-11-03 04:43:31.199539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.225816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.225988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:08.244 [2024-11-03 04:43:31.226014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.224 ms 00:23:08.244 [2024-11-03 04:43:31.226021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.226059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.226068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:08.244 [2024-11-03 04:43:31.226083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:08.244 [2024-11-03 04:43:31.226090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.226186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.244 [2024-11-03 04:43:31.226197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:08.244 [2024-11-03 04:43:31.226209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:08.244 [2024-11-03 04:43:31.226217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.244 [2024-11-03 04:43:31.227425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4351.230 ms, result 0 00:23:08.244 { 00:23:08.244 "name": "ftl0", 00:23:08.244 "uuid": "91672ceb-899a-43cc-a534-26de9442945d" 00:23:08.244 } 00:23:08.244 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:08.245 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:08.506 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:08.506 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:08.506 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:08.767 /dev/nbd0 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:08.767 1+0 records in 00:23:08.767 1+0 records out 00:23:08.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381129 s, 10.7 MB/s 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:08.767 04:43:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:08.767 [2024-11-03 04:43:31.806707] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:23:08.767 [2024-11-03 04:43:31.807015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77994 ] 00:23:09.029 [2024-11-03 04:43:31.971714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.290 [2024-11-03 04:43:32.119296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.703  [2024-11-03T04:43:34.728Z] Copying: 185/1024 [MB] (185 MBps) [2024-11-03T04:43:35.662Z] Copying: 387/1024 [MB] (201 MBps) [2024-11-03T04:43:36.596Z] Copying: 642/1024 [MB] (255 MBps) [2024-11-03T04:43:37.162Z] Copying: 894/1024 [MB] (251 MBps) [2024-11-03T04:43:37.729Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:23:14.645 00:23:14.645 04:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:16.545 04:43:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:16.545 [2024-11-03 04:43:39.263438] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
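
The spdk_dd instance whose startup banner appears just above (its EAL parameters follow) is the step that pushes the prepared test data onto the FTL device through NBD. Condensed into plain shell, the data-load phase traced in this part of the log amounts to the sketch below; command names and flags are taken from the xtrace output, the long /home/vagrant/spdk_repo paths are shortened, and the retry/size-check details of the waitfornbd helper are elided, so treat it as illustrative rather than a verbatim copy of the test script:

    modprobe nbd                                    # expose SPDK bdevs as kernel /dev/nbdX nodes
    rpc.py nbd_start_disk ftl0 /dev/nbd0            # attach the ftl0 bdev to /dev/nbd0

    # waitfornbd (condensed): poll until the kernel lists nbd0, then verify a direct read works
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
    done
    dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct

    # generate 1 GiB of random data (262144 x 4 KiB blocks), checksum it, then write it via NBD
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The checksum taken here is presumably re-checked later in the test, outside this part of the log, to confirm the data written through ftl0 survives the shutdown scenario.
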
00:23:16.545 [2024-11-03 04:43:39.263553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78071 ] 00:23:16.545 [2024-11-03 04:43:39.419067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.545 [2024-11-03 04:43:39.503730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.918  [2024-11-03T04:43:41.946Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-03T04:43:42.890Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-03T04:43:43.833Z] Copying: 63336/1048576 [kB] (9960 kBps) [2024-11-03T04:43:44.773Z] Copying: 73/1024 [MB] (11 MBps) [2024-11-03T04:43:45.718Z] Copying: 86/1024 [MB] (12 MBps) [2024-11-03T04:43:47.101Z] Copying: 98/1024 [MB] (12 MBps) [2024-11-03T04:43:48.044Z] Copying: 118/1024 [MB] (20 MBps) [2024-11-03T04:43:48.987Z] Copying: 136/1024 [MB] (17 MBps) [2024-11-03T04:43:49.929Z] Copying: 151/1024 [MB] (15 MBps) [2024-11-03T04:43:50.874Z] Copying: 165/1024 [MB] (13 MBps) [2024-11-03T04:43:51.873Z] Copying: 180/1024 [MB] (15 MBps) [2024-11-03T04:43:52.816Z] Copying: 197/1024 [MB] (17 MBps) [2024-11-03T04:43:53.756Z] Copying: 212/1024 [MB] (15 MBps) [2024-11-03T04:43:55.131Z] Copying: 232/1024 [MB] (20 MBps) [2024-11-03T04:43:55.703Z] Copying: 266/1024 [MB] (33 MBps) [2024-11-03T04:43:57.091Z] Copying: 285/1024 [MB] (19 MBps) [2024-11-03T04:43:58.034Z] Copying: 301/1024 [MB] (16 MBps) [2024-11-03T04:43:58.982Z] Copying: 318/1024 [MB] (16 MBps) [2024-11-03T04:43:59.923Z] Copying: 332/1024 [MB] (14 MBps) [2024-11-03T04:44:00.862Z] Copying: 349/1024 [MB] (17 MBps) [2024-11-03T04:44:01.800Z] Copying: 367/1024 [MB] (17 MBps) [2024-11-03T04:44:02.741Z] Copying: 383/1024 [MB] (15 MBps) [2024-11-03T04:44:04.125Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-03T04:44:05.058Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-03T04:44:05.991Z] Copying: 429/1024 [MB] (26 MBps) [2024-11-03T04:44:06.934Z] Copying: 464/1024 [MB] (35 MBps) [2024-11-03T04:44:07.876Z] Copying: 483/1024 [MB] (18 MBps) [2024-11-03T04:44:08.866Z] Copying: 497/1024 [MB] (13 MBps) [2024-11-03T04:44:09.801Z] Copying: 523/1024 [MB] (25 MBps) [2024-11-03T04:44:10.738Z] Copying: 557/1024 [MB] (33 MBps) [2024-11-03T04:44:12.124Z] Copying: 574/1024 [MB] (17 MBps) [2024-11-03T04:44:12.697Z] Copying: 588/1024 [MB] (13 MBps) [2024-11-03T04:44:14.081Z] Copying: 602/1024 [MB] (14 MBps) [2024-11-03T04:44:15.022Z] Copying: 618/1024 [MB] (16 MBps) [2024-11-03T04:44:15.961Z] Copying: 635/1024 [MB] (17 MBps) [2024-11-03T04:44:16.897Z] Copying: 653/1024 [MB] (18 MBps) [2024-11-03T04:44:17.839Z] Copying: 683/1024 [MB] (29 MBps) [2024-11-03T04:44:18.780Z] Copying: 702/1024 [MB] (19 MBps) [2024-11-03T04:44:19.721Z] Copying: 719/1024 [MB] (17 MBps) [2024-11-03T04:44:21.106Z] Copying: 736/1024 [MB] (16 MBps) [2024-11-03T04:44:22.045Z] Copying: 755/1024 [MB] (19 MBps) [2024-11-03T04:44:22.985Z] Copying: 775/1024 [MB] (19 MBps) [2024-11-03T04:44:23.926Z] Copying: 794/1024 [MB] (19 MBps) [2024-11-03T04:44:24.865Z] Copying: 808/1024 [MB] (13 MBps) [2024-11-03T04:44:25.855Z] Copying: 824/1024 [MB] (15 MBps) [2024-11-03T04:44:26.809Z] Copying: 841/1024 [MB] (17 MBps) [2024-11-03T04:44:27.749Z] Copying: 859/1024 [MB] (18 MBps) [2024-11-03T04:44:29.134Z] Copying: 876/1024 [MB] (17 MBps) [2024-11-03T04:44:29.706Z] Copying: 895/1024 [MB] (19 MBps) [2024-11-03T04:44:31.092Z] Copying: 907/1024 [MB] (11 MBps) 
[2024-11-03T04:44:32.039Z] Copying: 921/1024 [MB] (13 MBps) [2024-11-03T04:44:32.982Z] Copying: 936/1024 [MB] (14 MBps) [2024-11-03T04:44:33.926Z] Copying: 952/1024 [MB] (16 MBps) [2024-11-03T04:44:34.863Z] Copying: 969/1024 [MB] (17 MBps) [2024-11-03T04:44:35.802Z] Copying: 986/1024 [MB] (16 MBps) [2024-11-03T04:44:36.740Z] Copying: 1001/1024 [MB] (15 MBps) [2024-11-03T04:44:37.309Z] Copying: 1016/1024 [MB] (14 MBps) [2024-11-03T04:44:37.881Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:24:14.797 00:24:14.797 04:44:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:14.797 04:44:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:15.059 04:44:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:15.321 [2024-11-03 04:44:38.194906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.194944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:15.321 [2024-11-03 04:44:38.194956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:15.321 [2024-11-03 04:44:38.194966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.194990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:15.321 [2024-11-03 04:44:38.197615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.197647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:15.321 [2024-11-03 04:44:38.197658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:24:15.321 [2024-11-03 04:44:38.197665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.200115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.200144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:15.321 [2024-11-03 04:44:38.200155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.420 ms 00:24:15.321 [2024-11-03 04:44:38.200162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.220108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.220160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:15.321 [2024-11-03 04:44:38.220176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.916 ms 00:24:15.321 [2024-11-03 04:44:38.220188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.226447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.226478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:15.321 [2024-11-03 04:44:38.226492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:24:15.321 [2024-11-03 04:44:38.226504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.252196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.252324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:15.321 [2024-11-03 04:44:38.252389] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.598 ms 00:24:15.321 [2024-11-03 04:44:38.252413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.274859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.275016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:15.321 [2024-11-03 04:44:38.275101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.394 ms 00:24:15.321 [2024-11-03 04:44:38.275128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.321 [2024-11-03 04:44:38.275300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.321 [2024-11-03 04:44:38.275712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:15.321 [2024-11-03 04:44:38.275766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:15.321 [2024-11-03 04:44:38.275795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.322 [2024-11-03 04:44:38.300054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.322 [2024-11-03 04:44:38.300188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:15.322 [2024-11-03 04:44:38.300251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.202 ms 00:24:15.322 [2024-11-03 04:44:38.300274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.322 [2024-11-03 04:44:38.324237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.322 [2024-11-03 04:44:38.324367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:15.322 [2024-11-03 04:44:38.324427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:24:15.322 [2024-11-03 04:44:38.324438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.322 [2024-11-03 04:44:38.349068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.322 [2024-11-03 04:44:38.349229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:15.322 [2024-11-03 04:44:38.349301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.339 ms 00:24:15.322 [2024-11-03 04:44:38.349328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.322 [2024-11-03 04:44:38.373409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.322 [2024-11-03 04:44:38.373596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:15.322 [2024-11-03 04:44:38.373882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.974 ms 00:24:15.322 [2024-11-03 04:44:38.373926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.322 [2024-11-03 04:44:38.374043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:15.322 [2024-11-03 04:44:38.374083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.374997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.375937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376462] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:15.322 [2024-11-03 04:44:38.376736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 
04:44:38.376754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:15.323 [2024-11-03 04:44:38.376975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:15.323 [2024-11-03 04:44:38.376987] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 91672ceb-899a-43cc-a534-26de9442945d 00:24:15.323 [2024-11-03 04:44:38.376996] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:15.323 [2024-11-03 04:44:38.377008] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:15.323 [2024-11-03 04:44:38.377016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:15.323 [2024-11-03 04:44:38.377028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:15.323 [2024-11-03 04:44:38.377036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:15.323 [2024-11-03 04:44:38.377049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:15.323 [2024-11-03 04:44:38.377057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:15.323 [2024-11-03 04:44:38.377067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:15.323 [2024-11-03 04:44:38.377073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:15.323 [2024-11-03 04:44:38.377083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.323 [2024-11-03 04:44:38.377093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:15.323 [2024-11-03 04:44:38.377104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:24:15.323 [2024-11-03 04:44:38.377112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.323 [2024-11-03 04:44:38.391184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.323 [2024-11-03 04:44:38.391229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:15.323 [2024-11-03 04:44:38.391243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.999 ms 00:24:15.323 [2024-11-03 04:44:38.391254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.323 [2024-11-03 04:44:38.391677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.323 [2024-11-03 04:44:38.391691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:15.323 [2024-11-03 04:44:38.391704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:24:15.323 [2024-11-03 04:44:38.391712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.584 [2024-11-03 04:44:38.438790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.584 [2024-11-03 04:44:38.438840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.584 [2024-11-03 04:44:38.438853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.584 [2024-11-03 04:44:38.438865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.584 [2024-11-03 04:44:38.438936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.584 [2024-11-03 04:44:38.438946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.584 [2024-11-03 04:44:38.438956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.584 [2024-11-03 04:44:38.438964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.584 [2024-11-03 04:44:38.439052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.584 [2024-11-03 04:44:38.439065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.584 [2024-11-03 04:44:38.439076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.584 [2024-11-03 04:44:38.439083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
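
The 'Persist ...' steps, 'Set FTL clean state', the band-validity dump and the statistics records above, together with the per-step Rollback records that continue below, are the teardown trace of the FTL bdev. In this run it was triggered from the test script by flushing and detaching the NBD device and then unloading the bdev over RPC; condensed from the xtrace earlier in the log (paths shortened, illustrative only):

    sync /dev/nbd0                      # flush buffered writes through the NBD device
    rpc.py nbd_stop_disk /dev/nbd0      # detach ftl0 from /dev/nbd0
    rpc.py bdev_ftl_unload -b ftl0      # unload the FTL bdev: persist metadata, set the clean state

The dirty part of the scenario follows further down in the log, where the SPDK target process itself is killed with SIGKILL.
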
00:24:15.584 [2024-11-03 04:44:38.439110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.584 [2024-11-03 04:44:38.439118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.584 [2024-11-03 04:44:38.439129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.584 [2024-11-03 04:44:38.439137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.584 [2024-11-03 04:44:38.524029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.584 [2024-11-03 04:44:38.524089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.585 [2024-11-03 04:44:38.524106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.524118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.593661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.593899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.585 [2024-11-03 04:44:38.593925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.593934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.585 [2024-11-03 04:44:38.594075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.594085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.585 [2024-11-03 04:44:38.594167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.594175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.585 [2024-11-03 04:44:38.594319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.594329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:15.585 [2024-11-03 04:44:38.594390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.594398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.585 [2024-11-03 04:44:38.594469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 
04:44:38.594479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.585 [2024-11-03 04:44:38.594554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.585 [2024-11-03 04:44:38.594602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.585 [2024-11-03 04:44:38.594612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.585 [2024-11-03 04:44:38.594773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 399.812 ms, result 0 00:24:15.585 true 00:24:15.585 04:44:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77841 00:24:15.585 04:44:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77841 00:24:15.585 04:44:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:15.846 [2024-11-03 04:44:38.694937] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:24:15.846 [2024-11-03 04:44:38.695305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78696 ] 00:24:15.846 [2024-11-03 04:44:38.862000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.107 [2024-11-03 04:44:38.981070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.496  [2024-11-03T04:44:41.522Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-03T04:44:42.458Z] Copying: 377/1024 [MB] (190 MBps) [2024-11-03T04:44:43.399Z] Copying: 632/1024 [MB] (255 MBps) [2024-11-03T04:44:43.966Z] Copying: 889/1024 [MB] (256 MBps) [2024-11-03T04:44:44.533Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:24:21.449 00:24:21.449 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77841 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:21.449 04:44:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:21.449 [2024-11-03 04:44:44.388416] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
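
The spdk_dd instance starting here (its EAL parameters follow) is the point where the test stops using a separate SPDK target: as the trace above shows, the target process was killed with SIGKILL (the shell's 'Killed' notice for PID 77841 appears just before this step), a second 1 GiB file was generated, and ftl0 is now opened directly inside spdk_dd from the JSON bdev configuration captured earlier with save_subsystem_config. Condensed from the traced commands (long paths shortened, the PID variable name is illustrative; a sketch, not a verbatim copy of dirty_shutdown.sh):

    kill -9 "$spdk_tgt_pid"                                   # in this run, PID 77841 (the spdk_tgt started earlier)
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"

    # second random data set, then write it into ftl0 past the first 262144 blocks,
    # standing up the bdev stack from the saved JSON config instead of a running target
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json

The records that follow show that spdk_dd instance bringing the cache and base bdevs up again, running a blobstore recovery pass, and walking through a fresh 'FTL startup' sequence before the copy can begin.
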
00:24:21.449 [2024-11-03 04:44:44.388547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78758 ] 00:24:21.708 [2024-11-03 04:44:44.546947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.708 [2024-11-03 04:44:44.620212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.966 [2024-11-03 04:44:44.826335] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:21.966 [2024-11-03 04:44:44.826375] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:21.966 [2024-11-03 04:44:44.888923] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:21.966 [2024-11-03 04:44:44.889112] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:21.966 [2024-11-03 04:44:44.889326] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:22.226 [2024-11-03 04:44:45.125962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.125992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.226 [2024-11-03 04:44:45.126002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.226 [2024-11-03 04:44:45.126009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.126043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.126050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.226 [2024-11-03 04:44:45.126056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:22.226 [2024-11-03 04:44:45.126062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.126074] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.226 [2024-11-03 04:44:45.126631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.226 [2024-11-03 04:44:45.126644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.126650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.226 [2024-11-03 04:44:45.126657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:24:22.226 [2024-11-03 04:44:45.126663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.127589] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:22.226 [2024-11-03 04:44:45.137347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.137374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:22.226 [2024-11-03 04:44:45.137383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.759 ms 00:24:22.226 [2024-11-03 04:44:45.137389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.137431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.137438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:22.226 [2024-11-03 04:44:45.137445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:22.226 [2024-11-03 04:44:45.137450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.141745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.141767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.226 [2024-11-03 04:44:45.141774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:24:22.226 [2024-11-03 04:44:45.141780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.141833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.141840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.226 [2024-11-03 04:44:45.141846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:22.226 [2024-11-03 04:44:45.141852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.141888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.141898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.226 [2024-11-03 04:44:45.141904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:22.226 [2024-11-03 04:44:45.141910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.141924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.226 [2024-11-03 04:44:45.144553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.144588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.226 [2024-11-03 04:44:45.144609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:24:22.226 [2024-11-03 04:44:45.144615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.226 [2024-11-03 04:44:45.144641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.226 [2024-11-03 04:44:45.144648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.226 [2024-11-03 04:44:45.144654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:22.227 [2024-11-03 04:44:45.144660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.227 [2024-11-03 04:44:45.144674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:22.227 [2024-11-03 04:44:45.144690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:22.227 [2024-11-03 04:44:45.144716] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:22.227 [2024-11-03 04:44:45.144729] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:22.227 [2024-11-03 04:44:45.144807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.227 [2024-11-03 04:44:45.144815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.227 
[2024-11-03 04:44:45.144823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.227 [2024-11-03 04:44:45.144830] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.227 [2024-11-03 04:44:45.144839] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.227 [2024-11-03 04:44:45.144847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.227 [2024-11-03 04:44:45.144852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.227 [2024-11-03 04:44:45.144858] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.227 [2024-11-03 04:44:45.144863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.227 [2024-11-03 04:44:45.144869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.227 [2024-11-03 04:44:45.144875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.227 [2024-11-03 04:44:45.144881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:24:22.227 [2024-11-03 04:44:45.144886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.227 [2024-11-03 04:44:45.144949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.227 [2024-11-03 04:44:45.144955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.227 [2024-11-03 04:44:45.144963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:22.227 [2024-11-03 04:44:45.144969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.227 [2024-11-03 04:44:45.145043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.227 [2024-11-03 04:44:45.145051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.227 [2024-11-03 04:44:45.145057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.227 [2024-11-03 04:44:45.145074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.227 [2024-11-03 04:44:45.145091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.227 [2024-11-03 04:44:45.145103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.227 [2024-11-03 04:44:45.145113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.227 [2024-11-03 04:44:45.145118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.227 [2024-11-03 04:44:45.145123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.227 [2024-11-03 04:44:45.145128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.227 [2024-11-03 04:44:45.145133] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.227 [2024-11-03 04:44:45.145142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.227 [2024-11-03 04:44:45.145158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.227 [2024-11-03 04:44:45.145172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.227 [2024-11-03 04:44:45.145188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.227 [2024-11-03 04:44:45.145202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.227 [2024-11-03 04:44:45.145216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.227 [2024-11-03 04:44:45.145226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.227 [2024-11-03 04:44:45.145230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.227 [2024-11-03 04:44:45.145235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.227 [2024-11-03 04:44:45.145240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:22.227 [2024-11-03 04:44:45.145245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:22.227 [2024-11-03 04:44:45.145250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:22.227 [2024-11-03 04:44:45.145259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:22.227 [2024-11-03 04:44:45.145266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 04:44:45.145272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.227 [2024-11-03 04:44:45.145277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.227 [2024-11-03 04:44:45.145283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.227 [2024-11-03 
04:44:45.145296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.227 [2024-11-03 04:44:45.145301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.227 [2024-11-03 04:44:45.145306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.227 [2024-11-03 04:44:45.145311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.227 [2024-11-03 04:44:45.145315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.227 [2024-11-03 04:44:45.145320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:22.227 [2024-11-03 04:44:45.145326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.227 [2024-11-03 04:44:45.145333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.227 [2024-11-03 04:44:45.145346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:22.227 [2024-11-03 04:44:45.145351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:22.227 [2024-11-03 04:44:45.145356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:22.227 [2024-11-03 04:44:45.145362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:22.227 [2024-11-03 04:44:45.145367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:22.227 [2024-11-03 04:44:45.145372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:22.227 [2024-11-03 04:44:45.145378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:22.227 [2024-11-03 04:44:45.145384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:22.227 [2024-11-03 04:44:45.145389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:22.227 [2024-11-03 04:44:45.145415] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:22.227 [2024-11-03 04:44:45.145421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.227 [2024-11-03 04:44:45.145434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.227 [2024-11-03 04:44:45.145439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.227 [2024-11-03 04:44:45.145445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.227 [2024-11-03 04:44:45.145454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.145460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.228 [2024-11-03 04:44:45.145466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:24:22.228 [2024-11-03 04:44:45.145471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.166035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.166060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.228 [2024-11-03 04:44:45.166068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.532 ms 00:24:22.228 [2024-11-03 04:44:45.166074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.166136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.166145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:22.228 [2024-11-03 04:44:45.166151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:22.228 [2024-11-03 04:44:45.166156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.203542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.203580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.228 [2024-11-03 04:44:45.203590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.347 ms 00:24:22.228 [2024-11-03 04:44:45.203599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.203634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.203641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.228 [2024-11-03 04:44:45.203648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.228 [2024-11-03 04:44:45.203654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.203997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.204010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.228 [2024-11-03 04:44:45.204018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:22.228 [2024-11-03 04:44:45.204024] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.204123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.204129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.228 [2024-11-03 04:44:45.204135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:22.228 [2024-11-03 04:44:45.204142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.214780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.214801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.228 [2024-11-03 04:44:45.214810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.621 ms 00:24:22.228 [2024-11-03 04:44:45.214816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.225053] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:22.228 [2024-11-03 04:44:45.225083] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:22.228 [2024-11-03 04:44:45.225095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.225102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:22.228 [2024-11-03 04:44:45.225110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.207 ms 00:24:22.228 [2024-11-03 04:44:45.225116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.243815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.243842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:22.228 [2024-11-03 04:44:45.243859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.666 ms 00:24:22.228 [2024-11-03 04:44:45.243867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.252917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.252941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:22.228 [2024-11-03 04:44:45.252949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.018 ms 00:24:22.228 [2024-11-03 04:44:45.252955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.262513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.262535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:22.228 [2024-11-03 04:44:45.262543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.532 ms 00:24:22.228 [2024-11-03 04:44:45.262549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 [2024-11-03 04:44:45.263033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.263045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.228 [2024-11-03 04:44:45.263052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:24:22.228 [2024-11-03 04:44:45.263058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.228 
[2024-11-03 04:44:45.307095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.228 [2024-11-03 04:44:45.307134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:22.228 [2024-11-03 04:44:45.307145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.023 ms 00:24:22.228 [2024-11-03 04:44:45.307151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.315317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.486 [2024-11-03 04:44:45.317523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.317546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.486 [2024-11-03 04:44:45.317555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.334 ms 00:24:22.486 [2024-11-03 04:44:45.317574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.317644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.317655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:22.486 [2024-11-03 04:44:45.317664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:22.486 [2024-11-03 04:44:45.317670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.317720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.317729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.486 [2024-11-03 04:44:45.317736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:22.486 [2024-11-03 04:44:45.317742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.317758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.317765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.486 [2024-11-03 04:44:45.317774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.486 [2024-11-03 04:44:45.317781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.317805] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:22.486 [2024-11-03 04:44:45.317814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.317820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:22.486 [2024-11-03 04:44:45.317827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.486 [2024-11-03 04:44:45.317834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.336229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.336266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.486 [2024-11-03 04:44:45.336275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.380 ms 00:24:22.486 [2024-11-03 04:44:45.336282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.336341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.486 [2024-11-03 04:44:45.336349] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.486 [2024-11-03 04:44:45.336356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:22.486 [2024-11-03 04:44:45.336362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.486 [2024-11-03 04:44:45.337417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.106 ms, result 0 00:24:23.422  [2024-11-03T04:44:47.450Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-03T04:44:48.394Z] Copying: 65/1024 [MB] (18 MBps) [2024-11-03T04:44:49.786Z] Copying: 80/1024 [MB] (14 MBps) [2024-11-03T04:44:50.360Z] Copying: 98/1024 [MB] (18 MBps) [2024-11-03T04:44:51.741Z] Copying: 116/1024 [MB] (17 MBps) [2024-11-03T04:44:52.680Z] Copying: 128/1024 [MB] (12 MBps) [2024-11-03T04:44:53.620Z] Copying: 142/1024 [MB] (13 MBps) [2024-11-03T04:44:54.562Z] Copying: 156/1024 [MB] (14 MBps) [2024-11-03T04:44:55.507Z] Copying: 175/1024 [MB] (18 MBps) [2024-11-03T04:44:56.450Z] Copying: 193/1024 [MB] (18 MBps) [2024-11-03T04:44:57.392Z] Copying: 206/1024 [MB] (12 MBps) [2024-11-03T04:44:58.780Z] Copying: 226/1024 [MB] (19 MBps) [2024-11-03T04:44:59.364Z] Copying: 244/1024 [MB] (17 MBps) [2024-11-03T04:45:00.373Z] Copying: 262/1024 [MB] (18 MBps) [2024-11-03T04:45:01.756Z] Copying: 278/1024 [MB] (16 MBps) [2024-11-03T04:45:02.697Z] Copying: 294/1024 [MB] (15 MBps) [2024-11-03T04:45:03.632Z] Copying: 308/1024 [MB] (14 MBps) [2024-11-03T04:45:04.573Z] Copying: 336/1024 [MB] (27 MBps) [2024-11-03T04:45:05.518Z] Copying: 365/1024 [MB] (29 MBps) [2024-11-03T04:45:06.460Z] Copying: 387/1024 [MB] (21 MBps) [2024-11-03T04:45:07.399Z] Copying: 401/1024 [MB] (13 MBps) [2024-11-03T04:45:08.784Z] Copying: 436/1024 [MB] (35 MBps) [2024-11-03T04:45:09.357Z] Copying: 450/1024 [MB] (14 MBps) [2024-11-03T04:45:10.744Z] Copying: 461/1024 [MB] (10 MBps) [2024-11-03T04:45:11.685Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-03T04:45:12.626Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-03T04:45:13.567Z] Copying: 499/1024 [MB] (16 MBps) [2024-11-03T04:45:14.508Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-03T04:45:15.453Z] Copying: 538/1024 [MB] (22 MBps) [2024-11-03T04:45:16.397Z] Copying: 558/1024 [MB] (19 MBps) [2024-11-03T04:45:17.361Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-03T04:45:18.746Z] Copying: 597/1024 [MB] (21 MBps) [2024-11-03T04:45:19.694Z] Copying: 616/1024 [MB] (19 MBps) [2024-11-03T04:45:20.637Z] Copying: 633/1024 [MB] (16 MBps) [2024-11-03T04:45:21.583Z] Copying: 656/1024 [MB] (22 MBps) [2024-11-03T04:45:22.527Z] Copying: 676/1024 [MB] (19 MBps) [2024-11-03T04:45:23.474Z] Copying: 692/1024 [MB] (16 MBps) [2024-11-03T04:45:24.419Z] Copying: 712/1024 [MB] (19 MBps) [2024-11-03T04:45:25.364Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-03T04:45:26.750Z] Copying: 739/1024 [MB] (15 MBps) [2024-11-03T04:45:27.690Z] Copying: 753/1024 [MB] (13 MBps) [2024-11-03T04:45:28.637Z] Copying: 767/1024 [MB] (14 MBps) [2024-11-03T04:45:29.579Z] Copying: 786/1024 [MB] (18 MBps) [2024-11-03T04:45:30.532Z] Copying: 803/1024 [MB] (17 MBps) [2024-11-03T04:45:31.478Z] Copying: 820/1024 [MB] (16 MBps) [2024-11-03T04:45:32.424Z] Copying: 836/1024 [MB] (16 MBps) [2024-11-03T04:45:33.368Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-03T04:45:34.785Z] Copying: 859/1024 [MB] (12 MBps) [2024-11-03T04:45:35.360Z] Copying: 869/1024 [MB] (10 MBps) [2024-11-03T04:45:36.750Z] Copying: 879/1024 [MB] (10 MBps) [2024-11-03T04:45:37.694Z] Copying: 889/1024 [MB] (10 
MBps) [2024-11-03T04:45:38.640Z] Copying: 906/1024 [MB] (16 MBps) [2024-11-03T04:45:39.586Z] Copying: 921/1024 [MB] (15 MBps) [2024-11-03T04:45:40.531Z] Copying: 933/1024 [MB] (11 MBps) [2024-11-03T04:45:41.477Z] Copying: 945/1024 [MB] (12 MBps) [2024-11-03T04:45:42.422Z] Copying: 957/1024 [MB] (11 MBps) [2024-11-03T04:45:43.368Z] Copying: 968/1024 [MB] (11 MBps) [2024-11-03T04:45:44.754Z] Copying: 981/1024 [MB] (12 MBps) [2024-11-03T04:45:45.699Z] Copying: 994/1024 [MB] (13 MBps) [2024-11-03T04:45:46.645Z] Copying: 1014/1024 [MB] (19 MBps) [2024-11-03T04:45:46.645Z] Copying: 1048112/1048576 [kB] (9752 kBps) [2024-11-03T04:45:46.645Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-03 04:45:46.613946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.561 [2024-11-03 04:45:46.614199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.561 [2024-11-03 04:45:46.614217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:23.561 [2024-11-03 04:45:46.614224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.561 [2024-11-03 04:45:46.616898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.561 [2024-11-03 04:45:46.619698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.561 [2024-11-03 04:45:46.619726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.561 [2024-11-03 04:45:46.619736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:25:23.561 [2024-11-03 04:45:46.619743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.561 [2024-11-03 04:45:46.629622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.561 [2024-11-03 04:45:46.629655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.561 [2024-11-03 04:45:46.629663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.592 ms 00:25:23.561 [2024-11-03 04:45:46.629669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.822 [2024-11-03 04:45:46.647855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.822 [2024-11-03 04:45:46.647883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.822 [2024-11-03 04:45:46.647892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:25:23.822 [2024-11-03 04:45:46.647899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.822 [2024-11-03 04:45:46.652476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.822 [2024-11-03 04:45:46.652499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:23.822 [2024-11-03 04:45:46.652512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.558 ms 00:25:23.823 [2024-11-03 04:45:46.652519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.823 [2024-11-03 04:45:46.672095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.823 [2024-11-03 04:45:46.672122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.823 [2024-11-03 04:45:46.672131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.547 ms 00:25:23.823 [2024-11-03 04:45:46.672137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.823 [2024-11-03 
04:45:46.684239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.823 [2024-11-03 04:45:46.684266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.823 [2024-11-03 04:45:46.684276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.074 ms 00:25:23.823 [2024-11-03 04:45:46.684283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.823 [2024-11-03 04:45:46.888916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.823 [2024-11-03 04:45:46.888952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.823 [2024-11-03 04:45:46.888961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.603 ms 00:25:23.823 [2024-11-03 04:45:46.888972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.085 [2024-11-03 04:45:46.907905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.085 [2024-11-03 04:45:46.907932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.085 [2024-11-03 04:45:46.907940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.921 ms 00:25:24.085 [2024-11-03 04:45:46.907945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.085 [2024-11-03 04:45:46.926433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.085 [2024-11-03 04:45:46.926459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.085 [2024-11-03 04:45:46.926467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.461 ms 00:25:24.085 [2024-11-03 04:45:46.926472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.085 [2024-11-03 04:45:46.944679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.085 [2024-11-03 04:45:46.944704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.085 [2024-11-03 04:45:46.944711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.180 ms 00:25:24.085 [2024-11-03 04:45:46.944717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.085 [2024-11-03 04:45:46.962812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.085 [2024-11-03 04:45:46.962837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.085 [2024-11-03 04:45:46.962845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.972 ms 00:25:24.085 [2024-11-03 04:45:46.962850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.085 [2024-11-03 04:45:46.962876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.085 [2024-11-03 04:45:46.962888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 81152 / 261120 wr_cnt: 1 state: open 00:25:24.085 [2024-11-03 04:45:46.962897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 
04:45:46.962920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.962999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:25:24.085 [2024-11-03 04:45:46.963076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.085 [2024-11-03 04:45:46.963193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:24.086 [2024-11-03 04:45:46.963492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:24.086 [2024-11-03 04:45:46.963498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 91672ceb-899a-43cc-a534-26de9442945d 00:25:24.086 [2024-11-03 04:45:46.963504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 81152 00:25:24.086 [2024-11-03 04:45:46.963509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 82112 00:25:24.086 [2024-11-03 04:45:46.963523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 81152 00:25:24.086 [2024-11-03 
04:45:46.963529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0118 00:25:24.086 [2024-11-03 04:45:46.963534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:24.086 [2024-11-03 04:45:46.963540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:24.086 [2024-11-03 04:45:46.963546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:24.086 [2024-11-03 04:45:46.963550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:24.086 [2024-11-03 04:45:46.963555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:24.086 [2024-11-03 04:45:46.963570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.086 [2024-11-03 04:45:46.963576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:24.086 [2024-11-03 04:45:46.963584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:25:24.086 [2024-11-03 04:45:46.963590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:46.973567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.086 [2024-11-03 04:45:46.973591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:24.086 [2024-11-03 04:45:46.973599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.965 ms 00:25:24.086 [2024-11-03 04:45:46.973605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:46.973894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.086 [2024-11-03 04:45:46.973902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:24.086 [2024-11-03 04:45:46.973908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:25:24.086 [2024-11-03 04:45:46.973914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.001144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.001173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.086 [2024-11-03 04:45:47.001182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.001188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.001228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.001235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.086 [2024-11-03 04:45:47.001241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.001247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.001293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.001300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.086 [2024-11-03 04:45:47.001306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.001312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.001324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.001330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:25:24.086 [2024-11-03 04:45:47.001338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.001344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.065156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.065190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.086 [2024-11-03 04:45:47.065200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.065207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.117002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.117038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.086 [2024-11-03 04:45:47.117048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.117055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.086 [2024-11-03 04:45:47.117122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.086 [2024-11-03 04:45:47.117134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.086 [2024-11-03 04:45:47.117142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.086 [2024-11-03 04:45:47.117148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.087 [2024-11-03 04:45:47.117184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.087 [2024-11-03 04:45:47.117190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.087 [2024-11-03 04:45:47.117197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.087 [2024-11-03 04:45:47.117286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.087 [2024-11-03 04:45:47.117293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.087 [2024-11-03 04:45:47.117299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.087 [2024-11-03 04:45:47.117330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:24.087 [2024-11-03 04:45:47.117337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.087 [2024-11-03 04:45:47.117343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.087 [2024-11-03 04:45:47.117382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.087 [2024-11-03 04:45:47.117391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.087 [2024-11-03 04:45:47.117398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.087 [2024-11-03 
04:45:47.117443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.087 [2024-11-03 04:45:47.117449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.087 [2024-11-03 04:45:47.117456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.087 [2024-11-03 04:45:47.117606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 504.645 ms, result 0 00:25:25.472 00:25:25.472 00:25:25.472 04:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:27.388 04:45:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:27.388 [2024-11-03 04:45:50.445583] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:25:27.388 [2024-11-03 04:45:50.445702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79428 ] 00:25:27.647 [2024-11-03 04:45:50.602960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.647 [2024-11-03 04:45:50.698867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.907 [2024-11-03 04:45:50.929360] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:27.907 [2024-11-03 04:45:50.929413] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.173 [2024-11-03 04:45:51.086101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.173 [2024-11-03 04:45:51.086139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:28.173 [2024-11-03 04:45:51.086154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.173 [2024-11-03 04:45:51.086160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.173 [2024-11-03 04:45:51.086198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.173 [2024-11-03 04:45:51.086206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.173 [2024-11-03 04:45:51.086215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:28.173 [2024-11-03 04:45:51.086222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.173 [2024-11-03 04:45:51.086234] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:28.173 [2024-11-03 04:45:51.086775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:28.173 [2024-11-03 04:45:51.086796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.173 [2024-11-03 04:45:51.086803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.173 [2024-11-03 04:45:51.086810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:25:28.173 [2024-11-03 04:45:51.086816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.173 [2024-11-03 04:45:51.088145] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:28.173 [2024-11-03 04:45:51.098761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.098791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:28.174 [2024-11-03 04:45:51.098800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.619 ms 00:25:28.174 [2024-11-03 04:45:51.098806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.098858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.098868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:28.174 [2024-11-03 04:45:51.098875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:28.174 [2024-11-03 04:45:51.098881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.105253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.105279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.174 [2024-11-03 04:45:51.105286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.332 ms 00:25:28.174 [2024-11-03 04:45:51.105293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.105356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.105363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.174 [2024-11-03 04:45:51.105370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:28.174 [2024-11-03 04:45:51.105376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.105415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.105423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:28.174 [2024-11-03 04:45:51.105430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.174 [2024-11-03 04:45:51.105436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.105452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:28.174 [2024-11-03 04:45:51.108453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.108477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.174 [2024-11-03 04:45:51.108485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:25:28.174 [2024-11-03 04:45:51.108493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.108521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.108529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:28.174 [2024-11-03 04:45:51.108536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.174 [2024-11-03 04:45:51.108542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.108569] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:28.174 [2024-11-03 04:45:51.108592] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:28.174 [2024-11-03 04:45:51.108622] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:28.174 [2024-11-03 04:45:51.108637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:28.174 [2024-11-03 04:45:51.108723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:28.174 [2024-11-03 04:45:51.108732] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:28.174 [2024-11-03 04:45:51.108740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:28.174 [2024-11-03 04:45:51.108749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:28.174 [2024-11-03 04:45:51.108756] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:28.174 [2024-11-03 04:45:51.108763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:28.174 [2024-11-03 04:45:51.108769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:28.174 [2024-11-03 04:45:51.108775] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:28.174 [2024-11-03 04:45:51.108781] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:28.174 [2024-11-03 04:45:51.108789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.108795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:28.174 [2024-11-03 04:45:51.108801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:25:28.174 [2024-11-03 04:45:51.108807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.108871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.174 [2024-11-03 04:45:51.108879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:28.174 [2024-11-03 04:45:51.108885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:28.174 [2024-11-03 04:45:51.108891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.174 [2024-11-03 04:45:51.108968] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:28.174 [2024-11-03 04:45:51.108984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:28.174 [2024-11-03 04:45:51.108991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.174 [2024-11-03 04:45:51.108998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:28.174 [2024-11-03 04:45:51.109011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:28.174 [2024-11-03 04:45:51.109027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:28.174 [2024-11-03 
04:45:51.109032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.174 [2024-11-03 04:45:51.109038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:28.174 [2024-11-03 04:45:51.109044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:28.174 [2024-11-03 04:45:51.109050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.174 [2024-11-03 04:45:51.109058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:28.174 [2024-11-03 04:45:51.109064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:28.174 [2024-11-03 04:45:51.109074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:28.174 [2024-11-03 04:45:51.109084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:28.174 [2024-11-03 04:45:51.109100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:28.174 [2024-11-03 04:45:51.109114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:28.174 [2024-11-03 04:45:51.109128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:28.174 [2024-11-03 04:45:51.109143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:28.174 [2024-11-03 04:45:51.109158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.174 [2024-11-03 04:45:51.109167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:28.174 [2024-11-03 04:45:51.109172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:28.174 [2024-11-03 04:45:51.109177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.174 [2024-11-03 04:45:51.109182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:28.174 [2024-11-03 04:45:51.109187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:28.174 [2024-11-03 04:45:51.109191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:25:28.174 [2024-11-03 04:45:51.109203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:28.174 [2024-11-03 04:45:51.109207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109213] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:28.174 [2024-11-03 04:45:51.109220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:28.174 [2024-11-03 04:45:51.109226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.174 [2024-11-03 04:45:51.109237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:28.174 [2024-11-03 04:45:51.109243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:28.174 [2024-11-03 04:45:51.109249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:28.174 [2024-11-03 04:45:51.109254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:28.174 [2024-11-03 04:45:51.109259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:28.174 [2024-11-03 04:45:51.109265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:28.174 [2024-11-03 04:45:51.109271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:28.174 [2024-11-03 04:45:51.109278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:28.175 [2024-11-03 04:45:51.109289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:28.175 [2024-11-03 04:45:51.109296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:28.175 [2024-11-03 04:45:51.109301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:28.175 [2024-11-03 04:45:51.109306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:28.175 [2024-11-03 04:45:51.109312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:28.175 [2024-11-03 04:45:51.109317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:28.175 [2024-11-03 04:45:51.109322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:28.175 [2024-11-03 04:45:51.109328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:28.175 [2024-11-03 04:45:51.109333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:28.175 [2024-11-03 04:45:51.109360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:28.175 [2024-11-03 04:45:51.109366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.175 [2024-11-03 04:45:51.109379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:28.175 [2024-11-03 04:45:51.109384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:28.175 [2024-11-03 04:45:51.109390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:28.175 [2024-11-03 04:45:51.109400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.109413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:28.175 [2024-11-03 04:45:51.109419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:25:28.175 [2024-11-03 04:45:51.109424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.133891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.133919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.175 [2024-11-03 04:45:51.133927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.423 ms 00:25:28.175 [2024-11-03 04:45:51.133935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.134001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.134010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:28.175 [2024-11-03 04:45:51.134017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:28.175 [2024-11-03 04:45:51.134023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.188078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.188110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.175 [2024-11-03 04:45:51.188120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.016 ms 00:25:28.175 [2024-11-03 04:45:51.188127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.188160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 
04:45:51.188168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.175 [2024-11-03 04:45:51.188175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:28.175 [2024-11-03 04:45:51.188184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.188642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.188662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.175 [2024-11-03 04:45:51.188670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:25:28.175 [2024-11-03 04:45:51.188677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.188790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.188798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.175 [2024-11-03 04:45:51.188806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:28.175 [2024-11-03 04:45:51.188811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.200886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.200914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.175 [2024-11-03 04:45:51.200922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.053 ms 00:25:28.175 [2024-11-03 04:45:51.200930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.211748] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:28.175 [2024-11-03 04:45:51.211777] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.175 [2024-11-03 04:45:51.211787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.211794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.175 [2024-11-03 04:45:51.211801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.779 ms 00:25:28.175 [2024-11-03 04:45:51.211806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.231390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.231421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:28.175 [2024-11-03 04:45:51.231431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.550 ms 00:25:28.175 [2024-11-03 04:45:51.231437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.241015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.241047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:28.175 [2024-11-03 04:45:51.241055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.546 ms 00:25:28.175 [2024-11-03 04:45:51.241061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.250089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.250115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:25:28.175 [2024-11-03 04:45:51.250123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.000 ms 00:25:28.175 [2024-11-03 04:45:51.250129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.175 [2024-11-03 04:45:51.250607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.175 [2024-11-03 04:45:51.250627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:28.175 [2024-11-03 04:45:51.250634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:25:28.175 [2024-11-03 04:45:51.250641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.300162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.300195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:28.472 [2024-11-03 04:45:51.300206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.506 ms 00:25:28.472 [2024-11-03 04:45:51.300217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.308513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:28.472 [2024-11-03 04:45:51.310705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.310729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:28.472 [2024-11-03 04:45:51.310738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.456 ms 00:25:28.472 [2024-11-03 04:45:51.310745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.310816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.310825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:28.472 [2024-11-03 04:45:51.310832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.472 [2024-11-03 04:45:51.310838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.312005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.312031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:28.472 [2024-11-03 04:45:51.312039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:25:28.472 [2024-11-03 04:45:51.312045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.312064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.312071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:28.472 [2024-11-03 04:45:51.312078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.472 [2024-11-03 04:45:51.312084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.312116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:28.472 [2024-11-03 04:45:51.312125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.312132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:28.472 [2024-11-03 04:45:51.312138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.011 ms 00:25:28.472 [2024-11-03 04:45:51.312145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.330830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.330858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:28.472 [2024-11-03 04:45:51.330868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.670 ms 00:25:28.472 [2024-11-03 04:45:51.330875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.330937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.472 [2024-11-03 04:45:51.330946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:28.472 [2024-11-03 04:45:51.330953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:28.472 [2024-11-03 04:45:51.330960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.472 [2024-11-03 04:45:51.331891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 245.395 ms, result 0 00:25:29.415  [2024-11-03T04:45:53.887Z] Copying: 1556/1048576 [kB] (1556 kBps) [2024-11-03T04:45:54.831Z] Copying: 4632/1048576 [kB] (3076 kBps) [2024-11-03T04:45:55.776Z] Copying: 20/1024 [MB] (15 MBps) [2024-11-03T04:45:56.721Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-03T04:45:57.663Z] Copying: 62/1024 [MB] (21 MBps) [2024-11-03T04:45:58.606Z] Copying: 84/1024 [MB] (22 MBps) [2024-11-03T04:45:59.547Z] Copying: 102/1024 [MB] (18 MBps) [2024-11-03T04:46:00.490Z] Copying: 119/1024 [MB] (16 MBps) [2024-11-03T04:46:01.876Z] Copying: 137/1024 [MB] (17 MBps) [2024-11-03T04:46:02.820Z] Copying: 154/1024 [MB] (17 MBps) [2024-11-03T04:46:03.764Z] Copying: 172/1024 [MB] (17 MBps) [2024-11-03T04:46:04.709Z] Copying: 190/1024 [MB] (17 MBps) [2024-11-03T04:46:05.652Z] Copying: 206/1024 [MB] (16 MBps) [2024-11-03T04:46:06.596Z] Copying: 223/1024 [MB] (16 MBps) [2024-11-03T04:46:07.539Z] Copying: 239/1024 [MB] (16 MBps) [2024-11-03T04:46:08.479Z] Copying: 274/1024 [MB] (35 MBps) [2024-11-03T04:46:09.898Z] Copying: 293/1024 [MB] (18 MBps) [2024-11-03T04:46:10.839Z] Copying: 309/1024 [MB] (16 MBps) [2024-11-03T04:46:11.782Z] Copying: 325/1024 [MB] (15 MBps) [2024-11-03T04:46:12.725Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-03T04:46:13.667Z] Copying: 360/1024 [MB] (17 MBps) [2024-11-03T04:46:14.612Z] Copying: 378/1024 [MB] (17 MBps) [2024-11-03T04:46:15.554Z] Copying: 394/1024 [MB] (16 MBps) [2024-11-03T04:46:16.501Z] Copying: 410/1024 [MB] (15 MBps) [2024-11-03T04:46:17.889Z] Copying: 425/1024 [MB] (15 MBps) [2024-11-03T04:46:18.831Z] Copying: 441/1024 [MB] (15 MBps) [2024-11-03T04:46:19.773Z] Copying: 456/1024 [MB] (15 MBps) [2024-11-03T04:46:20.717Z] Copying: 474/1024 [MB] (17 MBps) [2024-11-03T04:46:21.661Z] Copying: 491/1024 [MB] (17 MBps) [2024-11-03T04:46:22.606Z] Copying: 508/1024 [MB] (17 MBps) [2024-11-03T04:46:23.551Z] Copying: 526/1024 [MB] (17 MBps) [2024-11-03T04:46:24.495Z] Copying: 544/1024 [MB] (17 MBps) [2024-11-03T04:46:25.886Z] Copying: 561/1024 [MB] (17 MBps) [2024-11-03T04:46:26.487Z] Copying: 579/1024 [MB] (17 MBps) [2024-11-03T04:46:27.874Z] Copying: 597/1024 [MB] (18 MBps) [2024-11-03T04:46:28.817Z] Copying: 615/1024 [MB] (18 MBps) [2024-11-03T04:46:29.769Z] Copying: 633/1024 [MB] (17 MBps) [2024-11-03T04:46:30.720Z] Copying: 648/1024 [MB] (15 MBps) [2024-11-03T04:46:31.664Z] Copying: 
666/1024 [MB] (17 MBps) [2024-11-03T04:46:32.608Z] Copying: 684/1024 [MB] (17 MBps) [2024-11-03T04:46:33.552Z] Copying: 701/1024 [MB] (17 MBps) [2024-11-03T04:46:34.495Z] Copying: 718/1024 [MB] (17 MBps) [2024-11-03T04:46:35.883Z] Copying: 736/1024 [MB] (17 MBps) [2024-11-03T04:46:36.826Z] Copying: 754/1024 [MB] (17 MBps) [2024-11-03T04:46:37.770Z] Copying: 772/1024 [MB] (18 MBps) [2024-11-03T04:46:38.715Z] Copying: 792/1024 [MB] (20 MBps) [2024-11-03T04:46:39.660Z] Copying: 811/1024 [MB] (18 MBps) [2024-11-03T04:46:40.604Z] Copying: 828/1024 [MB] (16 MBps) [2024-11-03T04:46:41.549Z] Copying: 849/1024 [MB] (20 MBps) [2024-11-03T04:46:42.494Z] Copying: 866/1024 [MB] (17 MBps) [2024-11-03T04:46:43.908Z] Copying: 883/1024 [MB] (16 MBps) [2024-11-03T04:46:44.481Z] Copying: 898/1024 [MB] (15 MBps) [2024-11-03T04:46:45.869Z] Copying: 914/1024 [MB] (15 MBps) [2024-11-03T04:46:46.813Z] Copying: 929/1024 [MB] (15 MBps) [2024-11-03T04:46:47.757Z] Copying: 945/1024 [MB] (15 MBps) [2024-11-03T04:46:48.700Z] Copying: 964/1024 [MB] (19 MBps) [2024-11-03T04:46:49.645Z] Copying: 982/1024 [MB] (18 MBps) [2024-11-03T04:46:50.588Z] Copying: 1000/1024 [MB] (18 MBps) [2024-11-03T04:46:50.850Z] Copying: 1017/1024 [MB] (16 MBps) [2024-11-03T04:46:51.112Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-03 04:46:50.924244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.924323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.028 [2024-11-03 04:46:50.924348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:28.028 [2024-11-03 04:46:50.924359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.924387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.028 [2024-11-03 04:46:50.928041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.928089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.028 [2024-11-03 04:46:50.928102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:26:28.028 [2024-11-03 04:46:50.928111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.928394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.928407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.028 [2024-11-03 04:46:50.928418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:26:28.028 [2024-11-03 04:46:50.928432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.944376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.944430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.028 [2024-11-03 04:46:50.944444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:26:28.028 [2024-11-03 04:46:50.944452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.950739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.950779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.028 [2024-11-03 04:46:50.950793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.246 ms 00:26:28.028 [2024-11-03 04:46:50.950810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.976033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.976067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.028 [2024-11-03 04:46:50.976077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.163 ms 00:26:28.028 [2024-11-03 04:46:50.976084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.990381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.990413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.028 [2024-11-03 04:46:50.990425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:26:28.028 [2024-11-03 04:46:50.990433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:50.995076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:50.995119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.028 [2024-11-03 04:46:50.995129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:26:28.028 [2024-11-03 04:46:50.995137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:51.018971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:51.019004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.028 [2024-11-03 04:46:51.019014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.820 ms 00:26:28.028 [2024-11-03 04:46:51.019022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:51.042330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:51.042364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:28.028 [2024-11-03 04:46:51.042384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.275 ms 00:26:28.028 [2024-11-03 04:46:51.042391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:51.065789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:51.065819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.028 [2024-11-03 04:46:51.065829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.363 ms 00:26:28.028 [2024-11-03 04:46:51.065837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:51.088684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.028 [2024-11-03 04:46:51.088714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.028 [2024-11-03 04:46:51.088725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.796 ms 00:26:28.028 [2024-11-03 04:46:51.088732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.028 [2024-11-03 04:46:51.088764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:28.028 [2024-11-03 04:46:51.088777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 
1 state: closed 00:26:28.028 [2024-11-03 04:46:51.088787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:28.028 [2024-11-03 04:46:51.088796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.028 [2024-11-03 04:46:51.088916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 
/ 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.088999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089334] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 04:46:51.089512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.029 [2024-11-03 
04:46:51.089527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.029 [2024-11-03 04:46:51.089535] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 91672ceb-899a-43cc-a534-26de9442945d 00:26:28.029 [2024-11-03 04:46:51.089543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:28.029 [2024-11-03 04:46:51.089551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 183488 00:26:28.029 [2024-11-03 04:46:51.089568] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 181504 00:26:28.029 [2024-11-03 04:46:51.089576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0109 00:26:28.029 [2024-11-03 04:46:51.089587] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.029 [2024-11-03 04:46:51.089595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.029 [2024-11-03 04:46:51.089602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.029 [2024-11-03 04:46:51.089614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.030 [2024-11-03 04:46:51.089621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.030 [2024-11-03 04:46:51.089628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.030 [2024-11-03 04:46:51.089635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.030 [2024-11-03 04:46:51.089643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:26:28.030 [2024-11-03 04:46:51.089650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.030 [2024-11-03 04:46:51.102154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.030 [2024-11-03 04:46:51.102186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.030 [2024-11-03 04:46:51.102201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:26:28.030 [2024-11-03 04:46:51.102209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.030 [2024-11-03 04:46:51.102589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.030 [2024-11-03 04:46:51.102605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.030 [2024-11-03 04:46:51.102614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:26:28.030 [2024-11-03 04:46:51.102621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-03 04:46:51.136368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-03 04:46:51.136405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.291 [2024-11-03 04:46:51.136415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-03 04:46:51.136423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-03 04:46:51.136474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-03 04:46:51.136482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.291 [2024-11-03 04:46:51.136490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-03 04:46:51.136497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-03 
04:46:51.136586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-03 04:46:51.136601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.291 [2024-11-03 04:46:51.136609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-03 04:46:51.136616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.291 [2024-11-03 04:46:51.136631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.291 [2024-11-03 04:46:51.136639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.291 [2024-11-03 04:46:51.136647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.291 [2024-11-03 04:46:51.136654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.217375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.217430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.292 [2024-11-03 04:46:51.217443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.217451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.292 [2024-11-03 04:46:51.287511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.292 [2024-11-03 04:46:51.287618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.292 [2024-11-03 04:46:51.287708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.292 [2024-11-03 04:46:51.287833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.292 [2024-11-03 04:46:51.287899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287907] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.287948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.287957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.292 [2024-11-03 04:46:51.287966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.287974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.288024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.292 [2024-11-03 04:46:51.288034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.292 [2024-11-03 04:46:51.288044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.292 [2024-11-03 04:46:51.288051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.292 [2024-11-03 04:46:51.288186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.912 ms, result 0 00:26:29.236 00:26:29.236 00:26:29.236 04:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:31.155 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:31.155 04:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:31.155 [2024-11-03 04:46:54.149376] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:26:31.155 [2024-11-03 04:46:54.149469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80077 ] 00:26:31.417 [2024-11-03 04:46:54.303673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.417 [2024-11-03 04:46:54.406460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.677 [2024-11-03 04:46:54.662309] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:31.677 [2024-11-03 04:46:54.662376] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:31.940 [2024-11-03 04:46:54.823842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.940 [2024-11-03 04:46:54.823905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:31.940 [2024-11-03 04:46:54.823923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:31.940 [2024-11-03 04:46:54.823932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.940 [2024-11-03 04:46:54.823986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.940 [2024-11-03 04:46:54.823997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:31.940 [2024-11-03 04:46:54.824008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:31.940 [2024-11-03 04:46:54.824017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.940 [2024-11-03 04:46:54.824037] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as 
write buffer cache 00:26:31.941 [2024-11-03 04:46:54.824819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:31.941 [2024-11-03 04:46:54.824852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.824862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:31.941 [2024-11-03 04:46:54.824871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:26:31.941 [2024-11-03 04:46:54.824879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.826766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:31.941 [2024-11-03 04:46:54.840934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.840995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:31.941 [2024-11-03 04:46:54.841009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.169 ms 00:26:31.941 [2024-11-03 04:46:54.841017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.841090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.841103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:31.941 [2024-11-03 04:46:54.841112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:31.941 [2024-11-03 04:46:54.841120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.849019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.849058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:31.941 [2024-11-03 04:46:54.849068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.823 ms 00:26:31.941 [2024-11-03 04:46:54.849076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.849160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.849170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:31.941 [2024-11-03 04:46:54.849178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:31.941 [2024-11-03 04:46:54.849187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.849231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.849242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:31.941 [2024-11-03 04:46:54.849251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:31.941 [2024-11-03 04:46:54.849259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.849282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:31.941 [2024-11-03 04:46:54.853189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.853226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:31.941 [2024-11-03 04:46:54.853237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.912 ms 00:26:31.941 [2024-11-03 04:46:54.853248] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.853283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.853292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:31.941 [2024-11-03 04:46:54.853301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:31.941 [2024-11-03 04:46:54.853308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.853359] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:31.941 [2024-11-03 04:46:54.853382] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:31.941 [2024-11-03 04:46:54.853420] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:31.941 [2024-11-03 04:46:54.853440] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:31.941 [2024-11-03 04:46:54.853545] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:31.941 [2024-11-03 04:46:54.853571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:31.941 [2024-11-03 04:46:54.853583] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:31.941 [2024-11-03 04:46:54.853594] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:31.941 [2024-11-03 04:46:54.853604] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:31.941 [2024-11-03 04:46:54.853612] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:31.941 [2024-11-03 04:46:54.853620] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:31.941 [2024-11-03 04:46:54.853628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:31.941 [2024-11-03 04:46:54.853636] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:31.941 [2024-11-03 04:46:54.853648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.853657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:31.941 [2024-11-03 04:46:54.853665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:26:31.941 [2024-11-03 04:46:54.853672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.853757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.941 [2024-11-03 04:46:54.853767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:31.941 [2024-11-03 04:46:54.853775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:31.941 [2024-11-03 04:46:54.853782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.941 [2024-11-03 04:46:54.853887] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:31.941 [2024-11-03 04:46:54.853907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:31.941 [2024-11-03 04:46:54.853917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:31.941 
[2024-11-03 04:46:54.853925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.853933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:31.941 [2024-11-03 04:46:54.853940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.853947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:31.941 [2024-11-03 04:46:54.853955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:31.941 [2024-11-03 04:46:54.853962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:31.941 [2024-11-03 04:46:54.853969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:31.941 [2024-11-03 04:46:54.853976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:31.941 [2024-11-03 04:46:54.853983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:31.941 [2024-11-03 04:46:54.853990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:31.941 [2024-11-03 04:46:54.853999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:31.941 [2024-11-03 04:46:54.854007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:31.941 [2024-11-03 04:46:54.854020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:31.941 [2024-11-03 04:46:54.854034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:31.941 [2024-11-03 04:46:54.854040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:31.941 [2024-11-03 04:46:54.854054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.941 [2024-11-03 04:46:54.854068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:31.941 [2024-11-03 04:46:54.854075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.941 [2024-11-03 04:46:54.854090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:31.941 [2024-11-03 04:46:54.854097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.941 [2024-11-03 04:46:54.854110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:31.941 [2024-11-03 04:46:54.854117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:31.941 [2024-11-03 04:46:54.854131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:31.941 [2024-11-03 04:46:54.854137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:31.941 [2024-11-03 04:46:54.854150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:26:31.941 [2024-11-03 04:46:54.854156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:31.941 [2024-11-03 04:46:54.854163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:31.941 [2024-11-03 04:46:54.854169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:31.941 [2024-11-03 04:46:54.854176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:31.941 [2024-11-03 04:46:54.854182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:31.941 [2024-11-03 04:46:54.854195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:31.941 [2024-11-03 04:46:54.854201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.941 [2024-11-03 04:46:54.854207] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:31.941 [2024-11-03 04:46:54.854215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:31.942 [2024-11-03 04:46:54.854225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:31.942 [2024-11-03 04:46:54.854233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:31.942 [2024-11-03 04:46:54.854241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:31.942 [2024-11-03 04:46:54.854248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:31.942 [2024-11-03 04:46:54.854255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:31.942 [2024-11-03 04:46:54.854262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:31.942 [2024-11-03 04:46:54.854269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:31.942 [2024-11-03 04:46:54.854275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:31.942 [2024-11-03 04:46:54.854283] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:31.942 [2024-11-03 04:46:54.854294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:31.942 [2024-11-03 04:46:54.854309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:31.942 [2024-11-03 04:46:54.854316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:31.942 [2024-11-03 04:46:54.854324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:31.942 [2024-11-03 04:46:54.854331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:31.942 [2024-11-03 04:46:54.854338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:31.942 [2024-11-03 04:46:54.854345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:31.942 [2024-11-03 04:46:54.854352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:31.942 [2024-11-03 04:46:54.854359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:31.942 [2024-11-03 04:46:54.854365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:31.942 [2024-11-03 04:46:54.854399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:31.942 [2024-11-03 04:46:54.854408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:31.942 [2024-11-03 04:46:54.854426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:31.942 [2024-11-03 04:46:54.854434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:31.942 [2024-11-03 04:46:54.854441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:31.942 [2024-11-03 04:46:54.854448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.854455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:31.942 [2024-11-03 04:46:54.854464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:26:31.942 [2024-11-03 04:46:54.854472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.886486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.886536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:31.942 [2024-11-03 04:46:54.886550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.968 ms 00:26:31.942 [2024-11-03 04:46:54.886574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.886666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.886681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:31.942 [2024-11-03 04:46:54.886690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 
00:26:31.942 [2024-11-03 04:46:54.886698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.944543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.944629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:31.942 [2024-11-03 04:46:54.944644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.787 ms 00:26:31.942 [2024-11-03 04:46:54.944653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.944703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.944713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:31.942 [2024-11-03 04:46:54.944723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:31.942 [2024-11-03 04:46:54.944735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.945336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.945373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:31.942 [2024-11-03 04:46:54.945384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:26:31.942 [2024-11-03 04:46:54.945393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.945550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.945580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:31.942 [2024-11-03 04:46:54.945589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:26:31.942 [2024-11-03 04:46:54.945597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.961118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.961159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:31.942 [2024-11-03 04:46:54.961170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.494 ms 00:26:31.942 [2024-11-03 04:46:54.961181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:54.975209] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:31.942 [2024-11-03 04:46:54.975256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:31.942 [2024-11-03 04:46:54.975271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:54.975279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:31.942 [2024-11-03 04:46:54.975289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.980 ms 00:26:31.942 [2024-11-03 04:46:54.975296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:55.001023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:55.001074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:31.942 [2024-11-03 04:46:55.001086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 00:26:31.942 [2024-11-03 04:46:55.001094] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:31.942 [2024-11-03 04:46:55.014000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.942 [2024-11-03 04:46:55.014044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:31.942 [2024-11-03 04:46:55.014055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.853 ms 00:26:31.942 [2024-11-03 04:46:55.014063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.026549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.026601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:32.204 [2024-11-03 04:46:55.026612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:26:32.204 [2024-11-03 04:46:55.026619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.027252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.027283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:32.204 [2024-11-03 04:46:55.027294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:26:32.204 [2024-11-03 04:46:55.027303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.091084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.091149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:32.204 [2024-11-03 04:46:55.091165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.757 ms 00:26:32.204 [2024-11-03 04:46:55.091181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.102499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:32.204 [2024-11-03 04:46:55.105403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.105445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:32.204 [2024-11-03 04:46:55.105457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:26:32.204 [2024-11-03 04:46:55.105465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.105550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.105580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:32.204 [2024-11-03 04:46:55.105590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:32.204 [2024-11-03 04:46:55.105599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.106406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.106454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:32.204 [2024-11-03 04:46:55.106467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:26:32.204 [2024-11-03 04:46:55.106476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.106511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.204 [2024-11-03 04:46:55.106521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:26:32.204 [2024-11-03 04:46:55.106530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:32.204 [2024-11-03 04:46:55.106539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.204 [2024-11-03 04:46:55.106605] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:32.205 [2024-11-03 04:46:55.106622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.205 [2024-11-03 04:46:55.106633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:32.205 [2024-11-03 04:46:55.106643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:32.205 [2024-11-03 04:46:55.106653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.205 [2024-11-03 04:46:55.132479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.205 [2024-11-03 04:46:55.132533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:32.205 [2024-11-03 04:46:55.132546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.803 ms 00:26:32.205 [2024-11-03 04:46:55.132586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.205 [2024-11-03 04:46:55.132679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.205 [2024-11-03 04:46:55.132690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:32.205 [2024-11-03 04:46:55.132700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:32.205 [2024-11-03 04:46:55.132708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.205 [2024-11-03 04:46:55.134309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.987 ms, result 0 00:26:33.593  [2024-11-03T04:46:57.621Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-03T04:46:58.565Z] Copying: 29/1024 [MB] (15 MBps) [2024-11-03T04:46:59.511Z] Copying: 47/1024 [MB] (17 MBps) [2024-11-03T04:47:00.475Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-03T04:47:01.423Z] Copying: 76/1024 [MB] (17 MBps) [2024-11-03T04:47:02.367Z] Copying: 99/1024 [MB] (22 MBps) [2024-11-03T04:47:03.313Z] Copying: 121/1024 [MB] (22 MBps) [2024-11-03T04:47:04.700Z] Copying: 138/1024 [MB] (16 MBps) [2024-11-03T04:47:05.644Z] Copying: 157/1024 [MB] (18 MBps) [2024-11-03T04:47:06.589Z] Copying: 175/1024 [MB] (18 MBps) [2024-11-03T04:47:07.533Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-03T04:47:08.478Z] Copying: 204/1024 [MB] (17 MBps) [2024-11-03T04:47:09.423Z] Copying: 217/1024 [MB] (13 MBps) [2024-11-03T04:47:10.368Z] Copying: 233/1024 [MB] (15 MBps) [2024-11-03T04:47:11.756Z] Copying: 249/1024 [MB] (16 MBps) [2024-11-03T04:47:12.331Z] Copying: 266/1024 [MB] (17 MBps) [2024-11-03T04:47:13.722Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-03T04:47:14.669Z] Copying: 288/1024 [MB] (10 MBps) [2024-11-03T04:47:15.611Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-03T04:47:16.555Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-03T04:47:17.526Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-03T04:47:18.467Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-03T04:47:19.409Z] Copying: 346/1024 [MB] (11 MBps) [2024-11-03T04:47:20.353Z] Copying: 358/1024 [MB] (11 MBps) [2024-11-03T04:47:21.741Z] Copying: 369/1024 [MB] (11 MBps) [2024-11-03T04:47:22.683Z] Copying: 381/1024 [MB] (11 MBps) [2024-11-03T04:47:23.626Z] Copying: 392/1024 [MB] (11 MBps) 
[2024-11-03T04:47:24.570Z] Copying: 404/1024 [MB] (11 MBps) [2024-11-03T04:47:25.514Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-03T04:47:26.459Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-03T04:47:27.403Z] Copying: 439/1024 [MB] (11 MBps) [2024-11-03T04:47:28.347Z] Copying: 450/1024 [MB] (11 MBps) [2024-11-03T04:47:29.735Z] Copying: 462/1024 [MB] (11 MBps) [2024-11-03T04:47:30.684Z] Copying: 473/1024 [MB] (11 MBps) [2024-11-03T04:47:31.629Z] Copying: 486/1024 [MB] (13 MBps) [2024-11-03T04:47:32.573Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-03T04:47:33.517Z] Copying: 507/1024 [MB] (10 MBps) [2024-11-03T04:47:34.480Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-03T04:47:35.434Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-03T04:47:36.379Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-03T04:47:37.323Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-03T04:47:38.712Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-03T04:47:39.655Z] Copying: 574/1024 [MB] (11 MBps) [2024-11-03T04:47:40.597Z] Copying: 585/1024 [MB] (11 MBps) [2024-11-03T04:47:41.539Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-03T04:47:42.482Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-03T04:47:43.425Z] Copying: 621/1024 [MB] (13 MBps) [2024-11-03T04:47:44.369Z] Copying: 632/1024 [MB] (11 MBps) [2024-11-03T04:47:45.756Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-03T04:47:46.329Z] Copying: 660/1024 [MB] (17 MBps) [2024-11-03T04:47:47.717Z] Copying: 675/1024 [MB] (14 MBps) [2024-11-03T04:47:48.662Z] Copying: 693/1024 [MB] (17 MBps) [2024-11-03T04:47:49.607Z] Copying: 709/1024 [MB] (15 MBps) [2024-11-03T04:47:50.550Z] Copying: 721/1024 [MB] (12 MBps) [2024-11-03T04:47:51.495Z] Copying: 743/1024 [MB] (22 MBps) [2024-11-03T04:47:52.468Z] Copying: 767/1024 [MB] (23 MBps) [2024-11-03T04:47:53.412Z] Copying: 786/1024 [MB] (19 MBps) [2024-11-03T04:47:54.356Z] Copying: 801/1024 [MB] (14 MBps) [2024-11-03T04:47:55.744Z] Copying: 819/1024 [MB] (17 MBps) [2024-11-03T04:47:56.317Z] Copying: 831/1024 [MB] (11 MBps) [2024-11-03T04:47:57.707Z] Copying: 846/1024 [MB] (15 MBps) [2024-11-03T04:47:58.652Z] Copying: 860/1024 [MB] (14 MBps) [2024-11-03T04:47:59.597Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-03T04:48:00.535Z] Copying: 883/1024 [MB] (10 MBps) [2024-11-03T04:48:01.480Z] Copying: 903/1024 [MB] (20 MBps) [2024-11-03T04:48:02.422Z] Copying: 917/1024 [MB] (14 MBps) [2024-11-03T04:48:03.365Z] Copying: 934/1024 [MB] (17 MBps) [2024-11-03T04:48:04.752Z] Copying: 951/1024 [MB] (16 MBps) [2024-11-03T04:48:05.325Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-03T04:48:06.711Z] Copying: 976/1024 [MB] (13 MBps) [2024-11-03T04:48:07.654Z] Copying: 991/1024 [MB] (14 MBps) [2024-11-03T04:48:08.599Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-03T04:48:09.577Z] Copying: 1011/1024 [MB] (10 MBps) [2024-11-03T04:48:09.577Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-03T04:48:09.577Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-03 04:48:09.508218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.508497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:46.493 [2024-11-03 04:48:09.508525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:46.493 [2024-11-03 04:48:09.508539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.508625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.493 [2024-11-03 04:48:09.512517] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.512737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:46.493 [2024-11-03 04:48:09.512757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.869 ms 00:27:46.493 [2024-11-03 04:48:09.512774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.513046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.513059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:46.493 [2024-11-03 04:48:09.513071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:27:46.493 [2024-11-03 04:48:09.513080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.518148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.518176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:46.493 [2024-11-03 04:48:09.518189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.051 ms 00:27:46.493 [2024-11-03 04:48:09.518199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.524064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.524163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:46.493 [2024-11-03 04:48:09.524177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.840 ms 00:27:46.493 [2024-11-03 04:48:09.524183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.543861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.543956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:46.493 [2024-11-03 04:48:09.544000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.633 ms 00:27:46.493 [2024-11-03 04:48:09.544019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.565286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.565448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:46.493 [2024-11-03 04:48:09.565523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.180 ms 00:27:46.493 [2024-11-03 04:48:09.565548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.493 [2024-11-03 04:48:09.570516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.493 [2024-11-03 04:48:09.570628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:46.493 [2024-11-03 04:48:09.570688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.898 ms 00:27:46.493 [2024-11-03 04:48:09.570711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.756 [2024-11-03 04:48:09.594819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.756 [2024-11-03 04:48:09.594932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:46.756 [2024-11-03 04:48:09.594982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.079 ms 00:27:46.756 [2024-11-03 04:48:09.595004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:46.756 [2024-11-03 04:48:09.618716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.756 [2024-11-03 04:48:09.618851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:46.756 [2024-11-03 04:48:09.618900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.620 ms 00:27:46.756 [2024-11-03 04:48:09.618922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.756 [2024-11-03 04:48:09.642083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.756 [2024-11-03 04:48:09.642200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:46.756 [2024-11-03 04:48:09.642254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:27:46.756 [2024-11-03 04:48:09.642275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.756 [2024-11-03 04:48:09.665554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.756 [2024-11-03 04:48:09.665699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:46.756 [2024-11-03 04:48:09.665752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.156 ms 00:27:46.756 [2024-11-03 04:48:09.665773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.756 [2024-11-03 04:48:09.665818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:46.756 [2024-11-03 04:48:09.665846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.756 [2024-11-03 04:48:09.665885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:46.756 [2024-11-03 04:48:09.665915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.665984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666329] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.666997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667310] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.667953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:46.756 [2024-11-03 04:48:09.668434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 
04:48:09.668515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.668964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:27:46.757 [2024-11-03 04:48:09.669240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:46.757 [2024-11-03 04:48:09.669332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:46.757 [2024-11-03 04:48:09.669341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 91672ceb-899a-43cc-a534-26de9442945d 00:27:46.757 [2024-11-03 04:48:09.669354] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:46.757 [2024-11-03 04:48:09.669363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:46.757 [2024-11-03 04:48:09.669370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:46.757 [2024-11-03 04:48:09.669379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:46.757 [2024-11-03 04:48:09.669386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:46.757 [2024-11-03 04:48:09.669395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:46.757 [2024-11-03 04:48:09.669409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:46.757 [2024-11-03 04:48:09.669416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:46.757 [2024-11-03 04:48:09.669423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:46.757 [2024-11-03 04:48:09.669432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.757 [2024-11-03 04:48:09.669442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:46.757 [2024-11-03 04:48:09.669451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:27:46.757 [2024-11-03 04:48:09.669459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.682807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.757 [2024-11-03 04:48:09.682940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.757 [2024-11-03 04:48:09.682958] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.324 ms 00:27:46.757 [2024-11-03 04:48:09.682966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.683356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.757 [2024-11-03 04:48:09.683366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.757 [2024-11-03 04:48:09.683376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:27:46.757 [2024-11-03 04:48:09.683386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.719589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.757 [2024-11-03 04:48:09.719629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.757 [2024-11-03 04:48:09.719641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.757 [2024-11-03 04:48:09.719652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.719743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.757 [2024-11-03 04:48:09.719754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.757 [2024-11-03 04:48:09.719764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.757 [2024-11-03 04:48:09.719781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.719883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.757 [2024-11-03 04:48:09.719896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.757 [2024-11-03 04:48:09.719906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.757 [2024-11-03 04:48:09.719915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.719934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.757 [2024-11-03 04:48:09.719949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.757 [2024-11-03 04:48:09.719960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.757 [2024-11-03 04:48:09.719969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.757 [2024-11-03 04:48:09.805064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.757 [2024-11-03 04:48:09.805113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.757 [2024-11-03 04:48:09.805126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.757 [2024-11-03 04:48:09.805135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.874542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.874623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:47.019 [2024-11-03 04:48:09.874636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.874653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.874721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.874732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:27:47.019 [2024-11-03 04:48:09.874741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.874751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.874815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.874827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:47.019 [2024-11-03 04:48:09.874836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.874845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.874951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.874963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:47.019 [2024-11-03 04:48:09.874971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.874980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.875014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.875024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:47.019 [2024-11-03 04:48:09.875034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.875044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.875089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.875101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:47.019 [2024-11-03 04:48:09.875109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.875118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.875171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.019 [2024-11-03 04:48:09.875193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:47.019 [2024-11-03 04:48:09.875203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.019 [2024-11-03 04:48:09.875212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.019 [2024-11-03 04:48:09.875353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.100 ms, result 0 00:27:47.592 00:27:47.592 00:27:47.592 04:48:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:50.139 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:50.139 04:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:50.139 04:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:50.139 04:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:50.139 04:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:50.139 04:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:50.139 04:48:13 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77841 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77841 ']' 00:27:50.139 Process with pid 77841 is not found 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 77841 00:27:50.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77841) - No such process 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77841 is not found' 00:27:50.139 04:48:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:50.401 Remove shared memory files 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:50.401 ************************************ 00:27:50.401 END TEST ftl_dirty_shutdown 00:27:50.401 ************************************ 00:27:50.401 00:27:50.401 real 4m50.818s 00:27:50.401 user 5m23.033s 00:27:50.401 sys 0m29.198s 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.401 04:48:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.662 04:48:13 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:50.662 04:48:13 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:50.662 04:48:13 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.662 04:48:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:50.662 ************************************ 00:27:50.662 START TEST ftl_upgrade_shutdown 00:27:50.662 ************************************ 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:50.662 * Looking for test storage... 
00:27:50.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:50.662 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:50.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.663 --rc genhtml_branch_coverage=1 00:27:50.663 --rc genhtml_function_coverage=1 00:27:50.663 --rc genhtml_legend=1 00:27:50.663 --rc geninfo_all_blocks=1 00:27:50.663 --rc geninfo_unexecuted_blocks=1 00:27:50.663 00:27:50.663 ' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:50.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.663 --rc genhtml_branch_coverage=1 00:27:50.663 --rc genhtml_function_coverage=1 00:27:50.663 --rc genhtml_legend=1 00:27:50.663 --rc geninfo_all_blocks=1 00:27:50.663 --rc geninfo_unexecuted_blocks=1 00:27:50.663 00:27:50.663 ' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:50.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.663 --rc genhtml_branch_coverage=1 00:27:50.663 --rc genhtml_function_coverage=1 00:27:50.663 --rc genhtml_legend=1 00:27:50.663 --rc geninfo_all_blocks=1 00:27:50.663 --rc geninfo_unexecuted_blocks=1 00:27:50.663 00:27:50.663 ' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:50.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.663 --rc genhtml_branch_coverage=1 00:27:50.663 --rc genhtml_function_coverage=1 00:27:50.663 --rc genhtml_legend=1 00:27:50.663 --rc geninfo_all_blocks=1 00:27:50.663 --rc geninfo_unexecuted_blocks=1 00:27:50.663 00:27:50.663 ' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:50.663 04:48:13 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:50.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80955 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80955 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80955 ']' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.663 04:48:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:50.924 [2024-11-03 04:48:13.794699] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
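At this point tcp_target_setup launches the SPDK target that the upgrade test drives. A condensed sketch of that bring-up, assuming the default /var/tmp/spdk.sock RPC socket (the authoritative logic is tcp_target_setup in test/ftl/common.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!                 # recorded as pid 80955 in this run
    waitforlisten "$spdk_tgt_pid"   # poll until the UNIX-domain RPC socket answers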
00:27:50.924 [2024-11-03 04:48:13.794864] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80955 ] 00:27:50.924 [2024-11-03 04:48:13.964345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.185 [2024-11-03 04:48:14.087176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:51.759 04:48:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:52.020 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:52.282 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:52.282 { 00:27:52.282 "name": "basen1", 00:27:52.282 "aliases": [ 00:27:52.282 "4efea46a-9b91-43e7-ad52-b0407ac80458" 00:27:52.282 ], 00:27:52.282 "product_name": "NVMe disk", 00:27:52.282 "block_size": 4096, 00:27:52.282 "num_blocks": 1310720, 00:27:52.282 "uuid": "4efea46a-9b91-43e7-ad52-b0407ac80458", 00:27:52.282 "numa_id": -1, 00:27:52.282 "assigned_rate_limits": { 00:27:52.282 "rw_ios_per_sec": 0, 00:27:52.282 "rw_mbytes_per_sec": 0, 00:27:52.282 "r_mbytes_per_sec": 0, 00:27:52.282 "w_mbytes_per_sec": 0 00:27:52.282 }, 00:27:52.282 "claimed": true, 00:27:52.282 "claim_type": "read_many_write_one", 00:27:52.282 "zoned": false, 00:27:52.282 "supported_io_types": { 00:27:52.282 "read": true, 00:27:52.282 "write": true, 00:27:52.282 "unmap": true, 00:27:52.282 "flush": true, 00:27:52.282 "reset": true, 00:27:52.282 "nvme_admin": true, 00:27:52.282 "nvme_io": true, 00:27:52.282 "nvme_io_md": false, 00:27:52.282 "write_zeroes": true, 00:27:52.282 "zcopy": false, 00:27:52.282 "get_zone_info": false, 00:27:52.282 "zone_management": false, 00:27:52.282 "zone_append": false, 00:27:52.282 "compare": true, 00:27:52.282 "compare_and_write": false, 00:27:52.282 "abort": true, 00:27:52.282 "seek_hole": false, 00:27:52.282 "seek_data": false, 00:27:52.282 "copy": true, 00:27:52.282 "nvme_iov_md": false 00:27:52.282 }, 00:27:52.282 "driver_specific": { 00:27:52.282 "nvme": [ 00:27:52.282 { 00:27:52.282 "pci_address": "0000:00:11.0", 00:27:52.282 "trid": { 00:27:52.282 "trtype": "PCIe", 00:27:52.282 "traddr": "0000:00:11.0" 00:27:52.282 }, 00:27:52.282 "ctrlr_data": { 00:27:52.282 "cntlid": 0, 00:27:52.282 "vendor_id": "0x1b36", 00:27:52.282 "model_number": "QEMU NVMe Ctrl", 00:27:52.282 "serial_number": "12341", 00:27:52.282 "firmware_revision": "8.0.0", 00:27:52.282 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:52.282 "oacs": { 00:27:52.282 "security": 0, 00:27:52.282 "format": 1, 00:27:52.282 "firmware": 0, 00:27:52.282 "ns_manage": 1 00:27:52.282 }, 00:27:52.282 "multi_ctrlr": false, 00:27:52.282 "ana_reporting": false 00:27:52.282 }, 00:27:52.282 "vs": { 00:27:52.282 "nvme_version": "1.4" 00:27:52.282 }, 00:27:52.282 "ns_data": { 00:27:52.282 "id": 1, 00:27:52.282 "can_share": false 00:27:52.282 } 00:27:52.282 } 00:27:52.282 ], 00:27:52.282 "mp_policy": "active_passive" 00:27:52.282 } 00:27:52.282 } 00:27:52.282 ]' 00:27:52.282 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:52.282 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:52.282 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:52.542 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:52.542 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9683a622-9dca-4514-8196-df193d14c71f 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:52.543 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9683a622-9dca-4514-8196-df193d14c71f 00:27:52.803 04:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:53.064 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0306e4b3-a171-45d7-b382-f6d3235984dd 00:27:53.064 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0306e4b3-a171-45d7-b382-f6d3235984dd 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=979bc84a-4c00-4720-ad2b-5db692ca1c07 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 979bc84a-4c00-4720-ad2b-5db692ca1c07 ]] 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 979bc84a-4c00-4720-ad2b-5db692ca1c07 5120 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=979bc84a-4c00-4720-ad2b-5db692ca1c07 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 979bc84a-4c00-4720-ad2b-5db692ca1c07 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=979bc84a-4c00-4720-ad2b-5db692ca1c07 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:53.324 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:53.325 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 979bc84a-4c00-4720-ad2b-5db692ca1c07 00:27:53.585 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:53.585 { 00:27:53.585 "name": "979bc84a-4c00-4720-ad2b-5db692ca1c07", 00:27:53.585 "aliases": [ 00:27:53.585 "lvs/basen1p0" 00:27:53.585 ], 00:27:53.585 "product_name": "Logical Volume", 00:27:53.585 "block_size": 4096, 00:27:53.585 "num_blocks": 5242880, 00:27:53.585 "uuid": "979bc84a-4c00-4720-ad2b-5db692ca1c07", 00:27:53.585 "assigned_rate_limits": { 00:27:53.585 "rw_ios_per_sec": 0, 00:27:53.585 "rw_mbytes_per_sec": 0, 00:27:53.585 "r_mbytes_per_sec": 0, 00:27:53.585 "w_mbytes_per_sec": 0 00:27:53.585 }, 00:27:53.585 "claimed": false, 00:27:53.585 "zoned": false, 00:27:53.585 "supported_io_types": { 00:27:53.585 "read": true, 00:27:53.585 "write": true, 00:27:53.585 "unmap": true, 00:27:53.585 "flush": false, 00:27:53.585 "reset": true, 00:27:53.585 "nvme_admin": false, 00:27:53.585 "nvme_io": false, 00:27:53.585 "nvme_io_md": false, 00:27:53.585 "write_zeroes": 
true, 00:27:53.585 "zcopy": false, 00:27:53.585 "get_zone_info": false, 00:27:53.585 "zone_management": false, 00:27:53.585 "zone_append": false, 00:27:53.585 "compare": false, 00:27:53.585 "compare_and_write": false, 00:27:53.585 "abort": false, 00:27:53.585 "seek_hole": true, 00:27:53.585 "seek_data": true, 00:27:53.585 "copy": false, 00:27:53.585 "nvme_iov_md": false 00:27:53.585 }, 00:27:53.585 "driver_specific": { 00:27:53.585 "lvol": { 00:27:53.585 "lvol_store_uuid": "0306e4b3-a171-45d7-b382-f6d3235984dd", 00:27:53.586 "base_bdev": "basen1", 00:27:53.586 "thin_provision": true, 00:27:53.586 "num_allocated_clusters": 0, 00:27:53.586 "snapshot": false, 00:27:53.586 "clone": false, 00:27:53.586 "esnap_clone": false 00:27:53.586 } 00:27:53.586 } 00:27:53.586 } 00:27:53.586 ]' 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:53.586 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:53.845 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:53.845 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:53.845 04:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:54.103 04:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:54.103 04:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:54.103 04:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 979bc84a-4c00-4720-ad2b-5db692ca1c07 -c cachen1p0 --l2p_dram_limit 2 00:27:54.363 [2024-11-03 04:48:17.270651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.270689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:54.363 [2024-11-03 04:48:17.270701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:54.363 [2024-11-03 04:48:17.270708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.270752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.270760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:54.363 [2024-11-03 04:48:17.270768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:54.363 [2024-11-03 04:48:17.270774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.270790] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:54.363 [2024-11-03 
04:48:17.271362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:54.363 [2024-11-03 04:48:17.271384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.271391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:54.363 [2024-11-03 04:48:17.271399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.595 ms 00:27:54.363 [2024-11-03 04:48:17.271404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.271456] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2ea45cbb-4852-4dab-a4cc-8dee0eeeccb5 00:27:54.363 [2024-11-03 04:48:17.272427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.272455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:54.363 [2024-11-03 04:48:17.272464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:54.363 [2024-11-03 04:48:17.272471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.277245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.277277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:54.363 [2024-11-03 04:48:17.277284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.741 ms 00:27:54.363 [2024-11-03 04:48:17.277294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.277324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.277333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:54.363 [2024-11-03 04:48:17.277339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:54.363 [2024-11-03 04:48:17.277347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.277377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.277388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:54.363 [2024-11-03 04:48:17.277394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:54.363 [2024-11-03 04:48:17.277401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.277419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:54.363 [2024-11-03 04:48:17.280293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.280318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:54.363 [2024-11-03 04:48:17.280326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.879 ms 00:27:54.363 [2024-11-03 04:48:17.280335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.280356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.363 [2024-11-03 04:48:17.280363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:54.363 [2024-11-03 04:48:17.280370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:54.363 [2024-11-03 04:48:17.280376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:54.363 [2024-11-03 04:48:17.280395] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:54.363 [2024-11-03 04:48:17.280500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:54.363 [2024-11-03 04:48:17.280512] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:54.363 [2024-11-03 04:48:17.280520] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:54.363 [2024-11-03 04:48:17.280529] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:54.363 [2024-11-03 04:48:17.280535] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:54.363 [2024-11-03 04:48:17.280543] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:54.363 [2024-11-03 04:48:17.280549] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:54.364 [2024-11-03 04:48:17.280555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:54.364 [2024-11-03 04:48:17.280584] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:54.364 [2024-11-03 04:48:17.280593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.364 [2024-11-03 04:48:17.280599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:54.364 [2024-11-03 04:48:17.280607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:27:54.364 [2024-11-03 04:48:17.280612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.364 [2024-11-03 04:48:17.280677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.364 [2024-11-03 04:48:17.280684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:54.364 [2024-11-03 04:48:17.280692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:54.364 [2024-11-03 04:48:17.280702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.364 [2024-11-03 04:48:17.280776] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:54.364 [2024-11-03 04:48:17.280785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:54.364 [2024-11-03 04:48:17.280793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:54.364 [2024-11-03 04:48:17.280811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:54.364 [2024-11-03 04:48:17.280823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:54.364 [2024-11-03 04:48:17.280829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:54.364 [2024-11-03 04:48:17.280833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:54.364 [2024-11-03 04:48:17.280845] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:54.364 [2024-11-03 04:48:17.280851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:54.364 [2024-11-03 04:48:17.280862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:54.364 [2024-11-03 04:48:17.280867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:54.364 [2024-11-03 04:48:17.280880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:54.364 [2024-11-03 04:48:17.280887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:54.364 [2024-11-03 04:48:17.280899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:54.364 [2024-11-03 04:48:17.280904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:54.364 [2024-11-03 04:48:17.280918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:54.364 [2024-11-03 04:48:17.280924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:54.364 [2024-11-03 04:48:17.280935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:54.364 [2024-11-03 04:48:17.280940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:54.364 [2024-11-03 04:48:17.280951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:54.364 [2024-11-03 04:48:17.280957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:54.364 [2024-11-03 04:48:17.280969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:54.364 [2024-11-03 04:48:17.280974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:54.364 [2024-11-03 04:48:17.280985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:54.364 [2024-11-03 04:48:17.280991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.280996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:54.364 [2024-11-03 04:48:17.281002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:54.364 [2024-11-03 04:48:17.281007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.281012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:54.364 [2024-11-03 04:48:17.281018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:54.364 [2024-11-03 04:48:17.281024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.281028] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:54.364 [2024-11-03 04:48:17.281035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:54.364 [2024-11-03 04:48:17.281040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:54.364 [2024-11-03 04:48:17.281047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:54.364 [2024-11-03 04:48:17.281053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:54.364 [2024-11-03 04:48:17.281062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:54.364 [2024-11-03 04:48:17.281066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:54.364 [2024-11-03 04:48:17.281073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:54.364 [2024-11-03 04:48:17.281077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:54.364 [2024-11-03 04:48:17.281084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:54.364 [2024-11-03 04:48:17.281093] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:54.364 [2024-11-03 04:48:17.281102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:54.364 [2024-11-03 04:48:17.281115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:54.364 [2024-11-03 04:48:17.281133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:54.364 [2024-11-03 04:48:17.281140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:54.364 [2024-11-03 04:48:17.281145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:54.364 [2024-11-03 04:48:17.281151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:54.364 [2024-11-03 04:48:17.281199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:54.364 [2024-11-03 04:48:17.281207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:54.364 [2024-11-03 04:48:17.281222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:54.364 [2024-11-03 04:48:17.281227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:54.364 [2024-11-03 04:48:17.281234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:54.364 [2024-11-03 04:48:17.281239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.364 [2024-11-03 04:48:17.281246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:54.364 [2024-11-03 04:48:17.281252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:27:54.364 [2024-11-03 04:48:17.281258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.364 [2024-11-03 04:48:17.281287] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:27:54.364 [2024-11-03 04:48:17.281297] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:58.574 [2024-11-03 04:48:20.964401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:20.964500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:58.574 [2024-11-03 04:48:20.964518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3683.098 ms 00:27:58.574 [2024-11-03 04:48:20.964530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:20.996364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:20.996436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.574 [2024-11-03 04:48:20.996451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.547 ms 00:27:58.574 [2024-11-03 04:48:20.996463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:20.996554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:20.996609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:58.574 [2024-11-03 04:48:20.996619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:58.574 [2024-11-03 04:48:20.996633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.031823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.031879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.574 [2024-11-03 04:48:21.031892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.151 ms 00:27:58.574 [2024-11-03 04:48:21.031903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.031937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.031949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.574 [2024-11-03 04:48:21.031958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.574 [2024-11-03 04:48:21.031971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.032550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.032625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:58.574 [2024-11-03 04:48:21.032636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:27:58.574 [2024-11-03 04:48:21.032647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.032700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.032711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.574 [2024-11-03 04:48:21.032720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:58.574 [2024-11-03 04:48:21.032733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.050591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.050639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.574 [2024-11-03 04:48:21.050652] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.833 ms 00:27:58.574 [2024-11-03 04:48:21.050666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.063672] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:58.574 [2024-11-03 04:48:21.064990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.065031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:58.574 [2024-11-03 04:48:21.065045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.233 ms 00:27:58.574 [2024-11-03 04:48:21.065053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.100320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.100382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:58.574 [2024-11-03 04:48:21.100401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.232 ms 00:27:58.574 [2024-11-03 04:48:21.100410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.100519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.100531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:58.574 [2024-11-03 04:48:21.100546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:58.574 [2024-11-03 04:48:21.100584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.125416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.125468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:58.574 [2024-11-03 04:48:21.125484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.775 ms 00:27:58.574 [2024-11-03 04:48:21.125493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.149997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.150048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:58.574 [2024-11-03 04:48:21.150062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.452 ms 00:27:58.574 [2024-11-03 04:48:21.150070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.150692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.150718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.574 [2024-11-03 04:48:21.150730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:27:58.574 [2024-11-03 04:48:21.150738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.232911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.232965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:58.574 [2024-11-03 04:48:21.232985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.128 ms 00:27:58.574 [2024-11-03 04:48:21.232994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.259807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:58.574 [2024-11-03 04:48:21.259859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:58.574 [2024-11-03 04:48:21.259888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.714 ms 00:27:58.574 [2024-11-03 04:48:21.259897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.285861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.285908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:58.574 [2024-11-03 04:48:21.285922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.909 ms 00:27:58.574 [2024-11-03 04:48:21.285929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.311980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.312032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:58.574 [2024-11-03 04:48:21.312047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.996 ms 00:27:58.574 [2024-11-03 04:48:21.312054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.312109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.312119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:58.574 [2024-11-03 04:48:21.312135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:58.574 [2024-11-03 04:48:21.312143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.312242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.574 [2024-11-03 04:48:21.312253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:58.574 [2024-11-03 04:48:21.312263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:58.574 [2024-11-03 04:48:21.312271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.574 [2024-11-03 04:48:21.313729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4042.552 ms, result 0 00:27:58.574 { 00:27:58.574 "name": "ftl", 00:27:58.574 "uuid": "2ea45cbb-4852-4dab-a4cc-8dee0eeeccb5" 00:27:58.574 } 00:27:58.574 04:48:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:58.574 [2024-11-03 04:48:21.528535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.575 04:48:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:58.835 04:48:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:59.095 [2024-11-03 04:48:21.945083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.095 04:48:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:59.095 [2024-11-03 04:48:22.154415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.095 04:48:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:59.662 Fill FTL, iteration 1 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81077 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81077 /var/tmp/spdk.tgt.sock 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81077 ']' 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:59.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:59.662 04:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.662 [2024-11-03 04:48:22.583045] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
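The second spdk_tgt started here (pid 81077, core 1, its own RPC socket) acts as the dd-side initiator. A condensed sketch of the tcp_initiator_setup flow traced in this run, with the bdev config written to the exported ini.json path; treat it as a sketch rather than the verbatim helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
    # Attach the namespace exported over NVMe/TCP by the target as local bdev ftln1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
    # Save just the bdev subsystem so spdk_dd can be driven from a JSON config
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
            save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json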
00:27:59.662 [2024-11-03 04:48:22.583168] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81077 ] 00:27:59.662 [2024-11-03 04:48:22.740039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.920 [2024-11-03 04:48:22.832882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.487 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:00.487 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:00.487 04:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:00.745 ftln1 00:28:00.745 04:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:00.745 04:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81077 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81077 ']' 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81077 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81077 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:01.004 killing process with pid 81077 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81077' 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81077 00:28:01.004 04:48:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81077 00:28:02.380 04:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:02.380 04:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:02.380 [2024-11-03 04:48:25.157712] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
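The spdk_dd invocation above is the iteration-1 fill: --ob names an output bdev (ftln1) rather than a file, and --seek is counted in --bs-sized blocks, so seek=0 starts at the beginning of the device. Restated with comments, as a sketch of the tcp_dd wrapper used throughout this test:

    tcp_dd() {
        tcp_initiator_setup   # sets up ini.json on first use, returns early once it exists
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }
    # Fill pass: 1024 x 1 MiB of random data into ftln1 at queue depth 2, offset 0
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0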
00:28:02.380 [2024-11-03 04:48:25.157817] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81119 ] 00:28:02.380 [2024-11-03 04:48:25.322874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.380 [2024-11-03 04:48:25.400756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.785  [2024-11-03T04:48:27.804Z] Copying: 255/1024 [MB] (255 MBps) [2024-11-03T04:48:28.739Z] Copying: 500/1024 [MB] (245 MBps) [2024-11-03T04:48:30.114Z] Copying: 743/1024 [MB] (243 MBps) [2024-11-03T04:48:30.114Z] Copying: 993/1024 [MB] (250 MBps) [2024-11-03T04:48:30.682Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:28:07.598 00:28:07.598 Calculate MD5 checksum, iteration 1 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:07.598 04:48:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:07.598 [2024-11-03 04:48:30.477620] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:07.598 [2024-11-03 04:48:30.477934] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81177 ] 00:28:07.598 [2024-11-03 04:48:30.635676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.856 [2024-11-03 04:48:30.713525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.229  [2024-11-03T04:48:32.880Z] Copying: 633/1024 [MB] (633 MBps) [2024-11-03T04:48:33.140Z] Copying: 1024/1024 [MB] (average 621 MBps) 00:28:10.056 00:28:10.056 04:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:10.056 04:48:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:12.600 Fill FTL, iteration 2 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2ebc502cf140878a7c7c7822015334da 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.600 04:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:12.600 [2024-11-03 04:48:35.376849] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:12.600 [2024-11-03 04:48:35.376974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81235 ] 00:28:12.600 [2024-11-03 04:48:35.534518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.600 [2024-11-03 04:48:35.607122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.974  [2024-11-03T04:48:37.992Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-03T04:48:38.925Z] Copying: 510/1024 [MB] (250 MBps) [2024-11-03T04:48:40.307Z] Copying: 757/1024 [MB] (247 MBps) [2024-11-03T04:48:40.307Z] Copying: 1007/1024 [MB] (250 MBps) [2024-11-03T04:48:40.566Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:28:17.482 00:28:17.482 Calculate MD5 checksum, iteration 2 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:17.482 04:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:17.742 [2024-11-03 04:48:40.627215] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:17.742 [2024-11-03 04:48:40.627524] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81288 ] 00:28:17.742 [2024-11-03 04:48:40.790549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.002 [2024-11-03 04:48:40.886543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.382  [2024-11-03T04:48:43.031Z] Copying: 651/1024 [MB] (651 MBps) [2024-11-03T04:48:43.966Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:28:20.882 00:28:20.883 04:48:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:20.883 04:48:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ce53b96b1941889079d62b4b08072eec 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:22.821 [2024-11-03 04:48:45.687725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.821 [2024-11-03 04:48:45.687772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:22.821 [2024-11-03 04:48:45.687786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:22.821 [2024-11-03 04:48:45.687793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.821 [2024-11-03 04:48:45.687813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.821 [2024-11-03 04:48:45.687821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:22.821 [2024-11-03 04:48:45.687828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.821 [2024-11-03 04:48:45.687835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.821 [2024-11-03 04:48:45.687853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.821 [2024-11-03 04:48:45.687860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:22.821 [2024-11-03 04:48:45.687867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:22.821 [2024-11-03 04:48:45.687873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.821 [2024-11-03 04:48:45.687926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.193 ms, result 0 00:28:22.821 true 00:28:22.821 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:22.821 { 00:28:22.821 "name": "ftl", 00:28:22.821 "properties": [ 00:28:22.821 { 00:28:22.821 "name": "superblock_version", 00:28:22.821 "value": 5, 00:28:22.821 "read-only": true 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "name": "base_device", 00:28:22.821 "bands": [ 00:28:22.821 { 00:28:22.821 "id": 0, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 
00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 1, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 2, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 3, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 4, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 5, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 6, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 7, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 8, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 9, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 10, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 11, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 12, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 13, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 14, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 15, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 16, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 17, 00:28:22.821 "state": "FREE", 00:28:22.821 "validity": 0.0 00:28:22.821 } 00:28:22.821 ], 00:28:22.821 "read-only": true 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "name": "cache_device", 00:28:22.821 "type": "bdev", 00:28:22.821 "chunks": [ 00:28:22.821 { 00:28:22.821 "id": 0, 00:28:22.821 "state": "INACTIVE", 00:28:22.821 "utilization": 0.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 1, 00:28:22.821 "state": "CLOSED", 00:28:22.821 "utilization": 1.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 2, 00:28:22.821 "state": "CLOSED", 00:28:22.821 "utilization": 1.0 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 3, 00:28:22.821 "state": "OPEN", 00:28:22.821 "utilization": 0.001953125 00:28:22.821 }, 00:28:22.821 { 00:28:22.821 "id": 4, 00:28:22.821 "state": "OPEN", 00:28:22.821 "utilization": 0.0 00:28:22.821 } 00:28:22.821 ], 00:28:22.821 "read-only": true 00:28:22.821 }, 00:28:22.821 { 00:28:22.822 "name": "verbose_mode", 00:28:22.822 "value": true, 00:28:22.822 "unit": "", 00:28:22.822 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:22.822 }, 00:28:22.822 { 00:28:22.822 "name": "prep_upgrade_on_shutdown", 00:28:22.822 "value": false, 00:28:22.822 "unit": "", 00:28:22.822 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:22.822 } 00:28:22.822 ] 00:28:22.822 } 00:28:22.822 04:48:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:23.082 [2024-11-03 04:48:46.007889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:23.082 [2024-11-03 04:48:46.007924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:23.082 [2024-11-03 04:48:46.007933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.082 [2024-11-03 04:48:46.007939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.082 [2024-11-03 04:48:46.007956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.082 [2024-11-03 04:48:46.007963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:23.082 [2024-11-03 04:48:46.007969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.082 [2024-11-03 04:48:46.007975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.082 [2024-11-03 04:48:46.007989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.082 [2024-11-03 04:48:46.007996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:23.082 [2024-11-03 04:48:46.008002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:23.082 [2024-11-03 04:48:46.008008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.082 [2024-11-03 04:48:46.008050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:28:23.082 true 00:28:23.082 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:23.082 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:23.082 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.343 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:23.343 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:23.343 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:23.343 [2024-11-03 04:48:46.392198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.343 [2024-11-03 04:48:46.393227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:23.343 [2024-11-03 04:48:46.393486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.343 [2024-11-03 04:48:46.393591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.343 [2024-11-03 04:48:46.393864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.343 [2024-11-03 04:48:46.393964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:23.343 [2024-11-03 04:48:46.394226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:23.343 [2024-11-03 04:48:46.394259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.343 [2024-11-03 04:48:46.394326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.343 [2024-11-03 04:48:46.394350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:23.343 [2024-11-03 04:48:46.394372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.343 [2024-11-03 04:48:46.394393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:23.343 [2024-11-03 04:48:46.394606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 2.266 ms, result 0 00:28:23.343 true 00:28:23.343 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.605 { 00:28:23.605 "name": "ftl", 00:28:23.605 "properties": [ 00:28:23.605 { 00:28:23.605 "name": "superblock_version", 00:28:23.605 "value": 5, 00:28:23.605 "read-only": true 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "name": "base_device", 00:28:23.605 "bands": [ 00:28:23.605 { 00:28:23.605 "id": 0, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 1, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 2, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 3, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 4, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 5, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 6, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 7, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 8, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 9, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 10, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 11, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 12, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 13, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 14, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 15, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 16, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 17, 00:28:23.605 "state": "FREE", 00:28:23.605 "validity": 0.0 00:28:23.605 } 00:28:23.605 ], 00:28:23.605 "read-only": true 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "name": "cache_device", 00:28:23.605 "type": "bdev", 00:28:23.605 "chunks": [ 00:28:23.605 { 00:28:23.605 "id": 0, 00:28:23.605 "state": "INACTIVE", 00:28:23.605 "utilization": 0.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 1, 00:28:23.605 "state": "CLOSED", 00:28:23.605 "utilization": 1.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 2, 00:28:23.605 "state": "CLOSED", 00:28:23.605 "utilization": 1.0 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 3, 00:28:23.605 "state": "OPEN", 00:28:23.605 "utilization": 0.001953125 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "id": 4, 00:28:23.605 "state": "OPEN", 00:28:23.605 "utilization": 0.0 00:28:23.605 } 00:28:23.605 ], 00:28:23.605 "read-only": true 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "name": "verbose_mode", 
00:28:23.605 "value": true, 00:28:23.605 "unit": "", 00:28:23.605 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:23.605 }, 00:28:23.605 { 00:28:23.605 "name": "prep_upgrade_on_shutdown", 00:28:23.605 "value": true, 00:28:23.605 "unit": "", 00:28:23.605 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:23.605 } 00:28:23.605 ] 00:28:23.605 } 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80955 ]] 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80955 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80955 ']' 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80955 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80955 00:28:23.605 killing process with pid 80955 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80955' 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80955 00:28:23.605 04:48:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80955 00:28:24.550 [2024-11-03 04:48:47.405726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:24.550 [2024-11-03 04:48:47.421053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.550 [2024-11-03 04:48:47.421111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:24.550 [2024-11-03 04:48:47.421126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:24.550 [2024-11-03 04:48:47.421135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.550 [2024-11-03 04:48:47.421159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:24.550 [2024-11-03 04:48:47.424230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.550 [2024-11-03 04:48:47.424426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:24.550 [2024-11-03 04:48:47.424449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.055 ms 00:28:24.550 [2024-11-03 04:48:47.424457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-03 04:48:56.011447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-03 04:48:56.011841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:34.552 [2024-11-03 04:48:56.011870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8586.923 ms 00:28:34.552 [2024-11-03 04:48:56.011882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-03 04:48:56.013863] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-03 04:48:56.013915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:34.552 [2024-11-03 04:48:56.013927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.955 ms 00:28:34.552 [2024-11-03 04:48:56.013936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-03 04:48:56.015102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-03 04:48:56.015247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:34.552 [2024-11-03 04:48:56.015264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:28:34.552 [2024-11-03 04:48:56.015273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-03 04:48:56.026750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-03 04:48:56.026799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:34.553 [2024-11-03 04:48:56.026811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.426 ms 00:28:34.553 [2024-11-03 04:48:56.026819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.034430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.034629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:34.553 [2024-11-03 04:48:56.034652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.561 ms 00:28:34.553 [2024-11-03 04:48:56.034662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.034827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.034840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:34.553 [2024-11-03 04:48:56.034851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:34.553 [2024-11-03 04:48:56.034860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.045508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.045554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:34.553 [2024-11-03 04:48:56.045581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.623 ms 00:28:34.553 [2024-11-03 04:48:56.045589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.056222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.056268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:34.553 [2024-11-03 04:48:56.056279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.586 ms 00:28:34.553 [2024-11-03 04:48:56.056286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.066578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.066623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:34.553 [2024-11-03 04:48:56.066634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.244 ms 00:28:34.553 [2024-11-03 04:48:56.066642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.076912] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.076957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:34.553 [2024-11-03 04:48:56.076967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.171 ms 00:28:34.553 [2024-11-03 04:48:56.076974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.077021] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:34.553 [2024-11-03 04:48:56.077036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.553 [2024-11-03 04:48:56.077048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:34.553 [2024-11-03 04:48:56.077068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:34.553 [2024-11-03 04:48:56.077077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:34.553 [2024-11-03 04:48:56.077201] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:34.553 [2024-11-03 04:48:56.077209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2ea45cbb-4852-4dab-a4cc-8dee0eeeccb5 00:28:34.553 [2024-11-03 04:48:56.077218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:34.553 [2024-11-03 04:48:56.077226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:34.553 [2024-11-03 04:48:56.077233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:34.553 [2024-11-03 04:48:56.077241] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:34.553 [2024-11-03 04:48:56.077249] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:34.553 [2024-11-03 04:48:56.077257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:34.553 [2024-11-03 04:48:56.077265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:34.553 [2024-11-03 04:48:56.077272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:34.553 [2024-11-03 04:48:56.077281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:34.553 [2024-11-03 04:48:56.077290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.077302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:34.553 [2024-11-03 04:48:56.077314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:28:34.553 [2024-11-03 04:48:56.077322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.091283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.091326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:34.553 [2024-11-03 04:48:56.091338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.914 ms 00:28:34.553 [2024-11-03 04:48:56.091346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.091787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.553 [2024-11-03 04:48:56.091806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:34.553 [2024-11-03 04:48:56.091817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:28:34.553 [2024-11-03 04:48:56.091825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.138265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.138320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:34.553 [2024-11-03 04:48:56.138332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.138341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.138386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.138395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:34.553 [2024-11-03 04:48:56.138403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.138412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.138493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.138504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:34.553 [2024-11-03 04:48:56.138514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.138522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.138540] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.138553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:34.553 [2024-11-03 04:48:56.138586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.138595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.221983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.222159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:34.553 [2024-11-03 04:48:56.222176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.222184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.286195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.286234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:34.553 [2024-11-03 04:48:56.286244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.286252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.286327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.286337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:34.553 [2024-11-03 04:48:56.286345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.286352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.286392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.286401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:34.553 [2024-11-03 04:48:56.286413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.286420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.286504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.286513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:34.553 [2024-11-03 04:48:56.286521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.286528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.553 [2024-11-03 04:48:56.286556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.553 [2024-11-03 04:48:56.286588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:34.553 [2024-11-03 04:48:56.286596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.553 [2024-11-03 04:48:56.286606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.554 [2024-11-03 04:48:56.286641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.554 [2024-11-03 04:48:56.286650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:34.554 [2024-11-03 04:48:56.286657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.554 [2024-11-03 04:48:56.286664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.554 
[2024-11-03 04:48:56.286705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:34.554 [2024-11-03 04:48:56.286715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:34.554 [2024-11-03 04:48:56.286726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:34.554 [2024-11-03 04:48:56.286733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.554 [2024-11-03 04:48:56.286844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8865.751 ms, result 0 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:39.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81476 00:28:39.842 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81476 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81476 ']' 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.843 04:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.843 [2024-11-03 04:49:02.218622] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:39.843 [2024-11-03 04:49:02.218750] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81476 ] 00:28:39.843 [2024-11-03 04:49:02.378452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.843 [2024-11-03 04:49:02.479659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.416 [2024-11-03 04:49:03.224906] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:40.416 [2024-11-03 04:49:03.224995] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:40.416 [2024-11-03 04:49:03.379097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.416 [2024-11-03 04:49:03.379345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:40.416 [2024-11-03 04:49:03.379373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:40.416 [2024-11-03 04:49:03.379383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.416 [2024-11-03 04:49:03.379461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.416 [2024-11-03 04:49:03.379474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:40.416 [2024-11-03 04:49:03.379484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:40.416 [2024-11-03 04:49:03.379494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.416 [2024-11-03 04:49:03.379523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:40.416 [2024-11-03 04:49:03.380301] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:40.416 [2024-11-03 04:49:03.380332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.416 [2024-11-03 04:49:03.380341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:40.416 [2024-11-03 04:49:03.380351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.819 ms 00:28:40.416 [2024-11-03 04:49:03.380360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.382113] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:40.417 [2024-11-03 04:49:03.396793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.397022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:40.417 [2024-11-03 04:49:03.397048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.683 ms 00:28:40.417 [2024-11-03 04:49:03.397064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.397234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.397265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:40.417 [2024-11-03 04:49:03.397276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:40.417 [2024-11-03 04:49:03.397285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.405789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 
04:49:03.405842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:40.417 [2024-11-03 04:49:03.405854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.408 ms 00:28:40.417 [2024-11-03 04:49:03.405862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.405933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.405944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:40.417 [2024-11-03 04:49:03.405954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:28:40.417 [2024-11-03 04:49:03.405962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.406010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.406020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:40.417 [2024-11-03 04:49:03.406030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:40.417 [2024-11-03 04:49:03.406042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.406069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:40.417 [2024-11-03 04:49:03.410311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.410354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:40.417 [2024-11-03 04:49:03.410365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.247 ms 00:28:40.417 [2024-11-03 04:49:03.410373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.410407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.410416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:40.417 [2024-11-03 04:49:03.410426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:40.417 [2024-11-03 04:49:03.410434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.410493] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:40.417 [2024-11-03 04:49:03.410517] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:40.417 [2024-11-03 04:49:03.410579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:40.417 [2024-11-03 04:49:03.410598] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:40.417 [2024-11-03 04:49:03.410704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:40.417 [2024-11-03 04:49:03.410720] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:40.417 [2024-11-03 04:49:03.410732] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:40.417 [2024-11-03 04:49:03.410744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:40.417 [2024-11-03 04:49:03.410753] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:40.417 [2024-11-03 04:49:03.410762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:40.417 [2024-11-03 04:49:03.410775] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:40.417 [2024-11-03 04:49:03.410784] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:40.417 [2024-11-03 04:49:03.410793] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:40.417 [2024-11-03 04:49:03.410801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.410809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:40.417 [2024-11-03 04:49:03.410818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:28:40.417 [2024-11-03 04:49:03.410825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.410910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.417 [2024-11-03 04:49:03.410921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:40.417 [2024-11-03 04:49:03.410930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:40.417 [2024-11-03 04:49:03.410942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.417 [2024-11-03 04:49:03.411048] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:40.417 [2024-11-03 04:49:03.411062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:40.417 [2024-11-03 04:49:03.411072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:40.417 [2024-11-03 04:49:03.411098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:40.417 [2024-11-03 04:49:03.411115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:40.417 [2024-11-03 04:49:03.411123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:40.417 [2024-11-03 04:49:03.411130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:40.417 [2024-11-03 04:49:03.411145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:40.417 [2024-11-03 04:49:03.411152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:40.417 [2024-11-03 04:49:03.411170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:40.417 [2024-11-03 04:49:03.411177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:40.417 [2024-11-03 04:49:03.411190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:40.417 [2024-11-03 04:49:03.411198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411205] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:40.417 [2024-11-03 04:49:03.411211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:40.417 [2024-11-03 04:49:03.411218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:40.417 [2024-11-03 04:49:03.411231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:40.417 [2024-11-03 04:49:03.411239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:40.417 [2024-11-03 04:49:03.411261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:40.417 [2024-11-03 04:49:03.411268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:40.417 [2024-11-03 04:49:03.411282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:40.417 [2024-11-03 04:49:03.411290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:40.417 [2024-11-03 04:49:03.411307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:40.417 [2024-11-03 04:49:03.411314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:40.417 [2024-11-03 04:49:03.411329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:40.417 [2024-11-03 04:49:03.411336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:40.417 [2024-11-03 04:49:03.411354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.417 [2024-11-03 04:49:03.411371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:40.417 [2024-11-03 04:49:03.411377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:40.418 [2024-11-03 04:49:03.411385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.418 [2024-11-03 04:49:03.411391] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:40.418 [2024-11-03 04:49:03.411400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:40.418 [2024-11-03 04:49:03.411412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:40.418 [2024-11-03 04:49:03.411420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:40.418 [2024-11-03 04:49:03.411429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:40.418 [2024-11-03 04:49:03.411437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:40.418 [2024-11-03 04:49:03.411444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:40.418 [2024-11-03 04:49:03.411452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:40.418 [2024-11-03 04:49:03.411458] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:40.418 [2024-11-03 04:49:03.411466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:40.418 [2024-11-03 04:49:03.411474] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:40.418 [2024-11-03 04:49:03.411488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:40.418 [2024-11-03 04:49:03.411505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:40.418 [2024-11-03 04:49:03.411532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:40.418 [2024-11-03 04:49:03.411544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:40.418 [2024-11-03 04:49:03.411553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:40.418 [2024-11-03 04:49:03.411577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:40.418 [2024-11-03 04:49:03.411634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:40.418 [2024-11-03 04:49:03.411646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:40.418 [2024-11-03 04:49:03.411662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:40.418 [2024-11-03 04:49:03.411671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:40.418 [2024-11-03 04:49:03.411682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:40.418 [2024-11-03 04:49:03.411690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.418 [2024-11-03 04:49:03.411699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:40.418 [2024-11-03 04:49:03.411715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:28:40.418 [2024-11-03 04:49:03.411724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.418 [2024-11-03 04:49:03.411768] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:40.418 [2024-11-03 04:49:03.412377] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:44.631 [2024-11-03 04:49:07.412777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.413103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:44.631 [2024-11-03 04:49:07.413331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4000.992 ms 00:28:44.631 [2024-11-03 04:49:07.413378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.445512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.445753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:44.631 [2024-11-03 04:49:07.445837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.845 ms 00:28:44.631 [2024-11-03 04:49:07.445864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.446363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.446425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:44.631 [2024-11-03 04:49:07.446823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:44.631 [2024-11-03 04:49:07.446878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.482863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.483058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:44.631 [2024-11-03 04:49:07.483136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.882 ms 00:28:44.631 [2024-11-03 04:49:07.483161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.483228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.483254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:44.631 [2024-11-03 04:49:07.483277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:44.631 [2024-11-03 04:49:07.483297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.484007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.484155] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:44.631 [2024-11-03 04:49:07.484213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:28:44.631 [2024-11-03 04:49:07.484237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.484314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.484338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:44.631 [2024-11-03 04:49:07.484360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:44.631 [2024-11-03 04:49:07.484425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.502301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.502469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:44.631 [2024-11-03 04:49:07.502529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.832 ms 00:28:44.631 [2024-11-03 04:49:07.502552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.517122] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:44.631 [2024-11-03 04:49:07.517301] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:44.631 [2024-11-03 04:49:07.517367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.517389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:44.631 [2024-11-03 04:49:07.517410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.646 ms 00:28:44.631 [2024-11-03 04:49:07.517429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.532154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.532307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:44.631 [2024-11-03 04:49:07.532365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.670 ms 00:28:44.631 [2024-11-03 04:49:07.532377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.544965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.545011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:44.631 [2024-11-03 04:49:07.545025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.542 ms 00:28:44.631 [2024-11-03 04:49:07.545034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.557704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.557748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:44.631 [2024-11-03 04:49:07.557760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.617 ms 00:28:44.631 [2024-11-03 04:49:07.557768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.558435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.631 [2024-11-03 04:49:07.558463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:44.631 [2024-11-03 
04:49:07.558479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:28:44.631 [2024-11-03 04:49:07.558488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.631 [2024-11-03 04:49:07.636138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.636215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:44.632 [2024-11-03 04:49:07.636234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 77.626 ms 00:28:44.632 [2024-11-03 04:49:07.636243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.647629] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:44.632 [2024-11-03 04:49:07.648781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.648927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:44.632 [2024-11-03 04:49:07.648987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.476 ms 00:28:44.632 [2024-11-03 04:49:07.648999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.649097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.649109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:44.632 [2024-11-03 04:49:07.649125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:44.632 [2024-11-03 04:49:07.649134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.649198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.649210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:44.632 [2024-11-03 04:49:07.649220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:44.632 [2024-11-03 04:49:07.649228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.649252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.649261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:44.632 [2024-11-03 04:49:07.649271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:44.632 [2024-11-03 04:49:07.649285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.649323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:44.632 [2024-11-03 04:49:07.649334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.649343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:44.632 [2024-11-03 04:49:07.649352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:44.632 [2024-11-03 04:49:07.649360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.674956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.675126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:44.632 [2024-11-03 04:49:07.675155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.572 ms 00:28:44.632 [2024-11-03 04:49:07.675164] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.675250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.632 [2024-11-03 04:49:07.675261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:44.632 [2024-11-03 04:49:07.675271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:44.632 [2024-11-03 04:49:07.675280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.632 [2024-11-03 04:49:07.677832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4298.193 ms, result 0 00:28:44.632 [2024-11-03 04:49:07.691520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.632 [2024-11-03 04:49:07.707514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:44.893 [2024-11-03 04:49:07.715733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:45.155 04:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.155 04:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:45.155 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:45.155 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:45.155 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:45.417 [2024-11-03 04:49:08.400003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.417 [2024-11-03 04:49:08.400055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:45.417 [2024-11-03 04:49:08.400069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:45.417 [2024-11-03 04:49:08.400078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.417 [2024-11-03 04:49:08.400106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.417 [2024-11-03 04:49:08.400116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:45.417 [2024-11-03 04:49:08.400125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:45.417 [2024-11-03 04:49:08.400134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.417 [2024-11-03 04:49:08.400154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.417 [2024-11-03 04:49:08.400163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:45.417 [2024-11-03 04:49:08.400172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:45.417 [2024-11-03 04:49:08.400180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.417 [2024-11-03 04:49:08.400243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.225 ms, result 0 00:28:45.417 true 00:28:45.417 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:45.679 { 00:28:45.679 "name": "ftl", 00:28:45.679 "properties": [ 00:28:45.679 { 00:28:45.679 "name": "superblock_version", 00:28:45.679 "value": 5, 00:28:45.679 "read-only": true 00:28:45.679 }, 
00:28:45.679 { 00:28:45.679 "name": "base_device", 00:28:45.679 "bands": [ 00:28:45.679 { 00:28:45.679 "id": 0, 00:28:45.679 "state": "CLOSED", 00:28:45.679 "validity": 1.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 1, 00:28:45.679 "state": "CLOSED", 00:28:45.679 "validity": 1.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 2, 00:28:45.679 "state": "CLOSED", 00:28:45.679 "validity": 0.007843137254901933 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 3, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 4, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 5, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 6, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 7, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 8, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 9, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 10, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 11, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 12, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 13, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 14, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 15, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 16, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 }, 00:28:45.679 { 00:28:45.679 "id": 17, 00:28:45.679 "state": "FREE", 00:28:45.679 "validity": 0.0 00:28:45.679 } 00:28:45.679 ], 00:28:45.679 "read-only": true 00:28:45.679 }, 00:28:45.679 { 00:28:45.680 "name": "cache_device", 00:28:45.680 "type": "bdev", 00:28:45.680 "chunks": [ 00:28:45.680 { 00:28:45.680 "id": 0, 00:28:45.680 "state": "INACTIVE", 00:28:45.680 "utilization": 0.0 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "id": 1, 00:28:45.680 "state": "OPEN", 00:28:45.680 "utilization": 0.0 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "id": 2, 00:28:45.680 "state": "OPEN", 00:28:45.680 "utilization": 0.0 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "id": 3, 00:28:45.680 "state": "FREE", 00:28:45.680 "utilization": 0.0 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "id": 4, 00:28:45.680 "state": "FREE", 00:28:45.680 "utilization": 0.0 00:28:45.680 } 00:28:45.680 ], 00:28:45.680 "read-only": true 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "name": "verbose_mode", 00:28:45.680 "value": true, 00:28:45.680 "unit": "", 00:28:45.680 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:45.680 }, 00:28:45.680 { 00:28:45.680 "name": "prep_upgrade_on_shutdown", 00:28:45.680 "value": false, 00:28:45.680 "unit": "", 00:28:45.680 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:45.680 } 00:28:45.680 ] 00:28:45.680 } 00:28:45.680 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:45.680 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:45.680 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:45.941 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:45.941 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:45.941 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:45.941 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:45.941 04:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:46.203 Validate MD5 checksum, iteration 1 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:46.203 04:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:46.203 [2024-11-03 04:49:09.168998] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:46.203 [2024-11-03 04:49:09.169167] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81570 ] 00:28:46.464 [2024-11-03 04:49:09.339210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.464 [2024-11-03 04:49:09.482389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.385  [2024-11-03T04:49:12.055Z] Copying: 508/1024 [MB] (508 MBps) [2024-11-03T04:49:13.440Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:28:50.356 00:28:50.356 04:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:50.356 04:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:52.267 Validate MD5 checksum, iteration 2 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ebc502cf140878a7c7c7822015334da 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ebc502cf140878a7c7c7822015334da != \2\e\b\c\5\0\2\c\f\1\4\0\8\7\8\a\7\c\7\c\7\8\2\2\0\1\5\3\3\4\d\a ]] 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:52.267 04:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.526 [2024-11-03 04:49:15.354812] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:52.526 [2024-11-03 04:49:15.354938] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81638 ] 00:28:52.526 [2024-11-03 04:49:15.513877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.526 [2024-11-03 04:49:15.601176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.434  [2024-11-03T04:49:17.778Z] Copying: 684/1024 [MB] (684 MBps) [2024-11-03T04:49:18.721Z] Copying: 1024/1024 [MB] (average 668 MBps) 00:28:55.637 00:28:55.637 04:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:55.637 04:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce53b96b1941889079d62b4b08072eec 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce53b96b1941889079d62b4b08072eec != \c\e\5\3\b\9\6\b\1\9\4\1\8\8\9\0\7\9\d\6\2\b\4\b\0\8\0\7\2\e\e\c ]] 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81476 ]] 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81476 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81699 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81699 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81699 ']' 00:28:57.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:57.546 04:49:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.806 [2024-11-03 04:49:20.645755] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:28:57.806 [2024-11-03 04:49:20.646008] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81699 ] 00:28:57.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81476 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:57.806 [2024-11-03 04:49:20.798760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.806 [2024-11-03 04:49:20.875861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.373 [2024-11-03 04:49:21.437298] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:58.373 [2024-11-03 04:49:21.437501] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:58.634 [2024-11-03 04:49:21.580728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.581087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:58.634 [2024-11-03 04:49:21.581140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:58.634 [2024-11-03 04:49:21.581165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.581245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.581256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:58.634 [2024-11-03 04:49:21.581264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:58.634 [2024-11-03 04:49:21.581271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.581297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:58.634 [2024-11-03 04:49:21.581980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:58.634 [2024-11-03 04:49:21.581996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.582004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:58.634 [2024-11-03 04:49:21.582013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.708 ms 00:28:58.634 [2024-11-03 04:49:21.582020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.582324] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:58.634 [2024-11-03 04:49:21.598262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.598296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:58.634 [2024-11-03 04:49:21.598308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.940 ms 00:28:58.634 [2024-11-03 04:49:21.598315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.607175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:58.634 [2024-11-03 04:49:21.607206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:58.634 [2024-11-03 04:49:21.607219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:58.634 [2024-11-03 04:49:21.607226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.607533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.607544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:58.634 [2024-11-03 04:49:21.607552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:28:58.634 [2024-11-03 04:49:21.607587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.607634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.607645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:58.634 [2024-11-03 04:49:21.607653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:58.634 [2024-11-03 04:49:21.607661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.607685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.607693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:58.634 [2024-11-03 04:49:21.607700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:58.634 [2024-11-03 04:49:21.607707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.634 [2024-11-03 04:49:21.607726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:58.634 [2024-11-03 04:49:21.610786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.634 [2024-11-03 04:49:21.610813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:58.634 [2024-11-03 04:49:21.610822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.064 ms 00:28:58.634 [2024-11-03 04:49:21.610829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.635 [2024-11-03 04:49:21.610855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.635 [2024-11-03 04:49:21.610865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:58.635 [2024-11-03 04:49:21.610873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:58.635 [2024-11-03 04:49:21.610880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.635 [2024-11-03 04:49:21.610898] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:58.635 [2024-11-03 04:49:21.610915] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:58.635 [2024-11-03 04:49:21.610948] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:58.635 [2024-11-03 04:49:21.610962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:58.635 [2024-11-03 04:49:21.611065] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:58.635 [2024-11-03 04:49:21.611075] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:58.635 [2024-11-03 04:49:21.611085] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:58.635 [2024-11-03 04:49:21.611094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611102] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611110] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:58.635 [2024-11-03 04:49:21.611117] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:58.635 [2024-11-03 04:49:21.611124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:58.635 [2024-11-03 04:49:21.611131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:58.635 [2024-11-03 04:49:21.611138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.635 [2024-11-03 04:49:21.611145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:58.635 [2024-11-03 04:49:21.611154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.241 ms 00:28:58.635 [2024-11-03 04:49:21.611162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.635 [2024-11-03 04:49:21.611245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.635 [2024-11-03 04:49:21.611252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:58.635 [2024-11-03 04:49:21.611259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:28:58.635 [2024-11-03 04:49:21.611266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.635 [2024-11-03 04:49:21.611377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:58.635 [2024-11-03 04:49:21.611387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:58.635 [2024-11-03 04:49:21.611395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:58.635 [2024-11-03 04:49:21.611419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:58.635 [2024-11-03 04:49:21.611432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:58.635 [2024-11-03 04:49:21.611440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:58.635 [2024-11-03 04:49:21.611446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:58.635 [2024-11-03 04:49:21.611459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:58.635 [2024-11-03 04:49:21.611465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:58.635 [2024-11-03 04:49:21.611479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:58.635 [2024-11-03 04:49:21.611489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:58.635 [2024-11-03 04:49:21.611501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:58.635 [2024-11-03 04:49:21.611508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:58.635 [2024-11-03 04:49:21.611521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:58.635 [2024-11-03 04:49:21.611546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:58.635 [2024-11-03 04:49:21.611582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:58.635 [2024-11-03 04:49:21.611601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:58.635 [2024-11-03 04:49:21.611620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:58.635 [2024-11-03 04:49:21.611639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:58.635 [2024-11-03 04:49:21.611659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:58.635 [2024-11-03 04:49:21.611678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:58.635 [2024-11-03 04:49:21.611684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611690] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:58.635 [2024-11-03 04:49:21.611698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:58.635 [2024-11-03 04:49:21.611705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:58.635 [2024-11-03 04:49:21.611721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:58.635 [2024-11-03 04:49:21.611728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:58.635 [2024-11-03 04:49:21.611735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:58.635 [2024-11-03 04:49:21.611741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:58.635 [2024-11-03 04:49:21.611747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:58.635 [2024-11-03 04:49:21.611753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:58.635 [2024-11-03 04:49:21.611762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:58.635 [2024-11-03 04:49:21.611770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:58.635 [2024-11-03 04:49:21.611786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:58.635 [2024-11-03 04:49:21.611806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:58.635 [2024-11-03 04:49:21.611813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:58.635 [2024-11-03 04:49:21.611820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:58.635 [2024-11-03 04:49:21.611827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:58.635 [2024-11-03 04:49:21.611847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:58.636 [2024-11-03 04:49:21.611853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:58.636 [2024-11-03 04:49:21.611860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:58.636 [2024-11-03 04:49:21.611867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:58.636 [2024-11-03 04:49:21.611873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:58.636 [2024-11-03 04:49:21.611881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.636 [2024-11-03 04:49:21.611889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.636 [2024-11-03 04:49:21.611896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:58.636 [2024-11-03 04:49:21.611903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:58.636 [2024-11-03 04:49:21.611909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:58.636 [2024-11-03 04:49:21.611916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.611926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:58.636 [2024-11-03 04:49:21.611933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:28:58.636 [2024-11-03 04:49:21.611940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.635698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.635833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:58.636 [2024-11-03 04:49:21.635850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.708 ms 00:28:58.636 [2024-11-03 04:49:21.635858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.635896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.635904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:58.636 [2024-11-03 04:49:21.635911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:58.636 [2024-11-03 04:49:21.635919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.666427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.666460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:58.636 [2024-11-03 04:49:21.666470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.457 ms 00:28:58.636 [2024-11-03 04:49:21.666478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.666503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.666511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:58.636 [2024-11-03 04:49:21.666519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:58.636 [2024-11-03 04:49:21.666526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.666631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.666641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:58.636 [2024-11-03 04:49:21.666650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:58.636 [2024-11-03 04:49:21.666657] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.666706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.666717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:58.636 [2024-11-03 04:49:21.666725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:58.636 [2024-11-03 04:49:21.666732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.680965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.680996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:58.636 [2024-11-03 04:49:21.681006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.211 ms 00:28:58.636 [2024-11-03 04:49:21.681013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.681117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.681128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:58.636 [2024-11-03 04:49:21.681136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:58.636 [2024-11-03 04:49:21.681143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.636 [2024-11-03 04:49:21.712104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.636 [2024-11-03 04:49:21.712284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:58.636 [2024-11-03 04:49:21.712311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.941 ms 00:28:58.636 [2024-11-03 04:49:21.712323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.722047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.722077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:58.898 [2024-11-03 04:49:21.722087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:28:58.898 [2024-11-03 04:49:21.722102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.777888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.778048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:58.898 [2024-11-03 04:49:21.778071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.733 ms 00:28:58.898 [2024-11-03 04:49:21.778080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.778201] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:58.898 [2024-11-03 04:49:21.778291] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:58.898 [2024-11-03 04:49:21.778377] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:58.898 [2024-11-03 04:49:21.778464] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:58.898 [2024-11-03 04:49:21.778472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.778481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:58.898 [2024-11-03 
04:49:21.778490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:28:58.898 [2024-11-03 04:49:21.778497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.778552] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:58.898 [2024-11-03 04:49:21.778583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.778591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:58.898 [2024-11-03 04:49:21.778603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:58.898 [2024-11-03 04:49:21.778611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.793686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.793718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:58.898 [2024-11-03 04:49:21.793733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.055 ms 00:28:58.898 [2024-11-03 04:49:21.793741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.802122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.802164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:58.898 [2024-11-03 04:49:21.802175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:58.898 [2024-11-03 04:49:21.802182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.898 [2024-11-03 04:49:21.802267] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:58.898 [2024-11-03 04:49:21.802398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.898 [2024-11-03 04:49:21.802412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:58.898 [2024-11-03 04:49:21.802420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:28:58.898 [2024-11-03 04:49:21.802427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.842 [2024-11-03 04:49:22.612772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.842 [2024-11-03 04:49:22.613115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:59.842 [2024-11-03 04:49:22.613144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 809.542 ms 00:28:59.842 [2024-11-03 04:49:22.613154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.842 [2024-11-03 04:49:22.618349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.842 [2024-11-03 04:49:22.618403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:59.842 [2024-11-03 04:49:22.618417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.912 ms 00:28:59.842 [2024-11-03 04:49:22.618426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.842 [2024-11-03 04:49:22.619476] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:59.842 [2024-11-03 04:49:22.619531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.842 [2024-11-03 04:49:22.619541] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:59.842 [2024-11-03 04:49:22.619552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:28:59.842 [2024-11-03 04:49:22.619585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.842 [2024-11-03 04:49:22.619625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.842 [2024-11-03 04:49:22.619637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:59.842 [2024-11-03 04:49:22.619647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:59.842 [2024-11-03 04:49:22.619655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.842 [2024-11-03 04:49:22.619697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 817.424 ms, result 0 00:28:59.842 [2024-11-03 04:49:22.619741] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:59.842 [2024-11-03 04:49:22.619867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.842 [2024-11-03 04:49:22.619880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:59.842 [2024-11-03 04:49:22.619889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:28:59.842 [2024-11-03 04:49:22.619897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.278933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.278979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:00.415 [2024-11-03 04:49:23.278989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 657.812 ms 00:29:00.415 [2024-11-03 04:49:23.278996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.282379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.282407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:00.415 [2024-11-03 04:49:23.282416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms 00:29:00.415 [2024-11-03 04:49:23.282422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.282814] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:00.415 [2024-11-03 04:49:23.282839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.282846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:00.415 [2024-11-03 04:49:23.282853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:29:00.415 [2024-11-03 04:49:23.282859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.282875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.282882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:00.415 [2024-11-03 04:49:23.282888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:00.415 [2024-11-03 04:49:23.282894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 
04:49:23.282924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 663.183 ms, result 0 00:29:00.415 [2024-11-03 04:49:23.282957] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:00.415 [2024-11-03 04:49:23.282965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:00.415 [2024-11-03 04:49:23.282973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.282979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:00.415 [2024-11-03 04:49:23.282986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1480.725 ms 00:29:00.415 [2024-11-03 04:49:23.282992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.283016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.283023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:00.415 [2024-11-03 04:49:23.283032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.415 [2024-11-03 04:49:23.283039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.291599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:00.415 [2024-11-03 04:49:23.291686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.291695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:00.415 [2024-11-03 04:49:23.291703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.635 ms 00:29:00.415 [2024-11-03 04:49:23.291709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.292231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.292252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:00.415 [2024-11-03 04:49:23.292259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:29:00.415 [2024-11-03 04:49:23.292266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.293951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.293969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:00.415 [2024-11-03 04:49:23.293977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.672 ms 00:29:00.415 [2024-11-03 04:49:23.293983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.415 [2024-11-03 04:49:23.294013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.415 [2024-11-03 04:49:23.294020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:00.415 [2024-11-03 04:49:23.294027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.415 [2024-11-03 04:49:23.294032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.416 [2024-11-03 04:49:23.294111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.416 [2024-11-03 04:49:23.294119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:00.416 
[2024-11-03 04:49:23.294125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:00.416 [2024-11-03 04:49:23.294131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.416 [2024-11-03 04:49:23.294146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.416 [2024-11-03 04:49:23.294152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:00.416 [2024-11-03 04:49:23.294159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:00.416 [2024-11-03 04:49:23.294164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.416 [2024-11-03 04:49:23.294187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:00.416 [2024-11-03 04:49:23.294196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.416 [2024-11-03 04:49:23.294202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:00.416 [2024-11-03 04:49:23.294208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:00.416 [2024-11-03 04:49:23.294213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.416 [2024-11-03 04:49:23.294251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.416 [2024-11-03 04:49:23.294257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:00.416 [2024-11-03 04:49:23.294263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:00.416 [2024-11-03 04:49:23.294268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.416 [2024-11-03 04:49:23.294982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1714.053 ms, result 0 00:29:00.416 [2024-11-03 04:49:23.307923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.416 [2024-11-03 04:49:23.323923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:00.416 [2024-11-03 04:49:23.332027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:00.416 Validate MD5 checksum, iteration 1 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:00.416 04:49:23 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:00.416 04:49:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:00.416 [2024-11-03 04:49:23.417930] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:29:00.416 [2024-11-03 04:49:23.418022] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81738 ] 00:29:00.675 [2024-11-03 04:49:23.574760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.675 [2024-11-03 04:49:23.679973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.588  [2024-11-03T04:49:26.245Z] Copying: 538/1024 [MB] (538 MBps) [2024-11-03T04:49:31.514Z] Copying: 1024/1024 [MB] (average 576 MBps) 00:29:08.430 00:29:08.430 04:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:08.430 04:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:10.376 Validate MD5 checksum, iteration 2 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2ebc502cf140878a7c7c7822015334da 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2ebc502cf140878a7c7c7822015334da != \2\e\b\c\5\0\2\c\f\1\4\0\8\7\8\a\7\c\7\c\7\8\2\2\0\1\5\3\3\4\d\a ]] 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:10.376 04:49:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:10.376 [2024-11-03 04:49:33.124225] Starting SPDK v25.01-pre git sha1 
fa3ab7384 / DPDK 24.03.0 initialization... 00:29:10.376 [2024-11-03 04:49:33.124339] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81846 ] 00:29:10.376 [2024-11-03 04:49:33.278989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.376 [2024-11-03 04:49:33.364677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.760  [2024-11-03T04:49:35.787Z] Copying: 618/1024 [MB] (618 MBps) [2024-11-03T04:49:36.731Z] Copying: 1024/1024 [MB] (average 623 MBps) 00:29:13.647 00:29:13.647 04:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:13.647 04:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce53b96b1941889079d62b4b08072eec 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce53b96b1941889079d62b4b08072eec != \c\e\5\3\b\9\6\b\1\9\4\1\8\8\9\0\7\9\d\6\2\b\4\b\0\8\0\7\2\e\e\c ]] 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81699 ]] 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81699 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81699 ']' 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81699 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81699 00:29:15.550 killing process with pid 81699 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81699' 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81699 00:29:15.550 04:49:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81699 00:29:16.118 [2024-11-03 04:49:38.897477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:16.118 [2024-11-03 04:49:38.908848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.908882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:16.118 [2024-11-03 04:49:38.908892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:16.118 [2024-11-03 04:49:38.908899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.908915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:16.118 [2024-11-03 04:49:38.911037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.911062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:16.118 [2024-11-03 04:49:38.911070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.112 ms 00:29:16.118 [2024-11-03 04:49:38.911081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.911253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.911261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:16.118 [2024-11-03 04:49:38.911268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:29:16.118 [2024-11-03 04:49:38.911273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.912273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.912396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:16.118 [2024-11-03 04:49:38.912408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.989 ms 00:29:16.118 [2024-11-03 04:49:38.912415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.913311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.913328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:16.118 [2024-11-03 04:49:38.913336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:29:16.118 [2024-11-03 04:49:38.913342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.920614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.920643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:16.118 [2024-11-03 04:49:38.920651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.245 ms 00:29:16.118 [2024-11-03 04:49:38.920657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.924795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.924821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:16.118 [2024-11-03 04:49:38.924829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.107 ms 00:29:16.118 [2024-11-03 04:49:38.924835] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.924902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.924910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:16.118 [2024-11-03 04:49:38.924917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:16.118 [2024-11-03 04:49:38.924922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.932119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.932146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:16.118 [2024-11-03 04:49:38.932153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.185 ms 00:29:16.118 [2024-11-03 04:49:38.932159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.939288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.939393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:16.118 [2024-11-03 04:49:38.939405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.103 ms 00:29:16.118 [2024-11-03 04:49:38.939410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.946203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.946303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:16.118 [2024-11-03 04:49:38.946314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.770 ms 00:29:16.118 [2024-11-03 04:49:38.946319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.953285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.118 [2024-11-03 04:49:38.953381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:16.118 [2024-11-03 04:49:38.953392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.922 ms 00:29:16.118 [2024-11-03 04:49:38.953398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.118 [2024-11-03 04:49:38.953420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:16.118 [2024-11-03 04:49:38.953434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:16.118 [2024-11-03 04:49:38.953442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:16.118 [2024-11-03 04:49:38.953448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:16.118 [2024-11-03 04:49:38.953454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:16.118 [2024-11-03 04:49:38.953460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:16.118 [2024-11-03 04:49:38.953466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:16.118 [2024-11-03 04:49:38.953472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:16.118 [2024-11-03 04:49:38.953477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 
[2024-11-03 04:49:38.953483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:16.119 [2024-11-03 04:49:38.953540] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:16.119 [2024-11-03 04:49:38.953546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2ea45cbb-4852-4dab-a4cc-8dee0eeeccb5 00:29:16.119 [2024-11-03 04:49:38.953552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:16.119 [2024-11-03 04:49:38.953570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:16.119 [2024-11-03 04:49:38.953576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:16.119 [2024-11-03 04:49:38.953582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:16.119 [2024-11-03 04:49:38.953587] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:16.119 [2024-11-03 04:49:38.953593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:16.119 [2024-11-03 04:49:38.953599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:16.119 [2024-11-03 04:49:38.953604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:16.119 [2024-11-03 04:49:38.953609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:16.119 [2024-11-03 04:49:38.953614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.119 [2024-11-03 04:49:38.953621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:16.119 [2024-11-03 04:49:38.953631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:29:16.119 [2024-11-03 04:49:38.953637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:38.963208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.119 [2024-11-03 04:49:38.963233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:16.119 [2024-11-03 04:49:38.963242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.558 ms 00:29:16.119 [2024-11-03 04:49:38.963248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
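For reference, the ftl/upgrade_shutdown.sh xtrace earlier in this block is the post-upgrade checksum pass: test_validate_checksum reads the restored FTL bdev back over NVMe/TCP in 1 GiB slices with spdk_dd and compares each slice's MD5 sum against the value recorded before the shutdown, and killing the target (pid 81699) afterwards is what triggers the 'FTL shutdown' persist, band-validity and statistics entries around this point. A minimal sketch of that loop, reconstructed only from the traced commands (the md5 reference array and the if/return form of the comparison are assumptions; paths and arguments are as they appear in the log):

    # sketch of test_validate_checksum() as suggested by the upgrade_shutdown.sh xtrace
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # tcp_dd (ftl/common.sh) wraps spdk_dd with --cpumask=[1], the initiator
            # rpc socket and test/ftl/config/ini.json, as shown in the trace above
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # md5[i] is assumed to hold the checksum captured before the FTL device was shut down
            if [[ $sum != "${md5[i]}" ]]; then
                return 1
            fi
        done
    }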
00:29:16.119 [2024-11-03 04:49:38.963515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.119 [2024-11-03 04:49:38.963529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:16.119 [2024-11-03 04:49:38.963536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:29:16.119 [2024-11-03 04:49:38.963542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:38.996386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:38.996490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:16.119 [2024-11-03 04:49:38.996502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:38.996508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:38.996531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:38.996542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:16.119 [2024-11-03 04:49:38.996548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:38.996577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:38.996640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:38.996648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:16.119 [2024-11-03 04:49:38.996655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:38.996661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:38.996673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:38.996679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:16.119 [2024-11-03 04:49:38.996689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:38.996695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.055324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.055356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:16.119 [2024-11-03 04:49:39.055365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.055371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.102679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.102713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:16.119 [2024-11-03 04:49:39.102721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.102728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.102775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.102782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:16.119 [2024-11-03 04:49:39.102788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.102794] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.102834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.102841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:16.119 [2024-11-03 04:49:39.102848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.102861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.102931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.102939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:16.119 [2024-11-03 04:49:39.102945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.102950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.102977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.102984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:16.119 [2024-11-03 04:49:39.102990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.102996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.103024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.103030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:16.119 [2024-11-03 04:49:39.103036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.103042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.103072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:16.119 [2024-11-03 04:49:39.103079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:16.119 [2024-11-03 04:49:39.103085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:16.119 [2024-11-03 04:49:39.103093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.119 [2024-11-03 04:49:39.103178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 194.309 ms, result 0 00:29:16.688 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.689 Remove shared memory files 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:16.689 04:49:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81476 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:16.689 ************************************ 00:29:16.689 END TEST ftl_upgrade_shutdown 00:29:16.689 ************************************ 00:29:16.689 00:29:16.689 real 1m26.255s 00:29:16.689 user 1m55.275s 00:29:16.689 sys 0m20.126s 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.689 04:49:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.957 04:49:39 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:16.957 04:49:39 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:16.957 04:49:39 ftl -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:29:16.957 04:49:39 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:16.957 04:49:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:16.957 ************************************ 00:29:16.957 START TEST ftl_restore_fast 00:29:16.957 ************************************ 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:16.957 * Looking for test storage... 00:29:16.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.957 --rc genhtml_branch_coverage=1 00:29:16.957 --rc genhtml_function_coverage=1 00:29:16.957 --rc genhtml_legend=1 00:29:16.957 --rc geninfo_all_blocks=1 00:29:16.957 --rc geninfo_unexecuted_blocks=1 00:29:16.957 00:29:16.957 ' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.957 --rc genhtml_branch_coverage=1 00:29:16.957 --rc genhtml_function_coverage=1 00:29:16.957 --rc genhtml_legend=1 00:29:16.957 --rc geninfo_all_blocks=1 00:29:16.957 --rc geninfo_unexecuted_blocks=1 00:29:16.957 00:29:16.957 ' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.957 --rc genhtml_branch_coverage=1 00:29:16.957 --rc genhtml_function_coverage=1 00:29:16.957 --rc genhtml_legend=1 00:29:16.957 --rc geninfo_all_blocks=1 00:29:16.957 --rc geninfo_unexecuted_blocks=1 00:29:16.957 00:29:16.957 ' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.957 --rc genhtml_branch_coverage=1 00:29:16.957 --rc genhtml_function_coverage=1 00:29:16.957 --rc genhtml_legend=1 00:29:16.957 --rc geninfo_all_blocks=1 00:29:16.957 --rc geninfo_unexecuted_blocks=1 00:29:16.957 00:29:16.957 ' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
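The scripts/common.sh trace above is the per-test coverage preamble: it probes the installed lcov version with lt/cmp_versions/decimal before exporting the LCOV_OPTS branch and function coverage flags. A rough reconstruction of those helpers, inferred from the traced line numbers (exact bodies are assumptions):

    # sketch of the version helpers from scripts/common.sh, as suggested by the xtrace above
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then
            echo "$d"
        else
            echo 0
        fi
    }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if ((ver1[v] > ver2[v])); then
                [[ $op == ">" || $op == ">=" ]]
                return
            elif ((ver1[v] < ver2[v])); then
                [[ $op == "<" || $op == "<=" ]]
                return
            fi
        done
        # all compared components equal
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }

    # thin comparison wrapper, e.g. the `lt 1.15 2` call traced above
    lt() { cmp_versions "$1" '<' "$2"; }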
00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.957 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0x1pbT6I5B 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:16.958 04:49:39 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81993 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81993 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # '[' -z 81993 ']' 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.958 04:49:39 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.958 04:49:40 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:17.218 [2024-11-03 04:49:40.089020] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:29:17.218 [2024-11-03 04:49:40.089865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81993 ] 00:29:17.218 [2024-11-03 04:49:40.243420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.479 [2024-11-03 04:49:40.343544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@866 -- # return 0 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:18.052 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1383 -- # local nb 00:29:18.314 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:18.576 { 00:29:18.576 "name": "nvme0n1", 00:29:18.576 "aliases": [ 00:29:18.576 "11e9b69f-eccf-4b20-aef3-e0d1a7745b56" 00:29:18.576 ], 00:29:18.576 "product_name": "NVMe disk", 00:29:18.576 "block_size": 4096, 00:29:18.576 "num_blocks": 1310720, 00:29:18.576 "uuid": "11e9b69f-eccf-4b20-aef3-e0d1a7745b56", 00:29:18.576 "numa_id": -1, 00:29:18.576 "assigned_rate_limits": { 00:29:18.576 "rw_ios_per_sec": 0, 00:29:18.576 "rw_mbytes_per_sec": 0, 00:29:18.576 "r_mbytes_per_sec": 0, 00:29:18.576 "w_mbytes_per_sec": 0 00:29:18.576 }, 00:29:18.576 "claimed": true, 00:29:18.576 "claim_type": "read_many_write_one", 00:29:18.576 "zoned": false, 00:29:18.576 "supported_io_types": { 00:29:18.576 "read": true, 00:29:18.576 "write": true, 00:29:18.576 "unmap": true, 00:29:18.576 "flush": true, 00:29:18.576 "reset": true, 00:29:18.576 "nvme_admin": true, 00:29:18.576 "nvme_io": true, 00:29:18.576 "nvme_io_md": false, 00:29:18.576 "write_zeroes": true, 00:29:18.576 "zcopy": false, 00:29:18.576 "get_zone_info": false, 00:29:18.576 "zone_management": false, 00:29:18.576 "zone_append": false, 00:29:18.576 "compare": true, 00:29:18.576 "compare_and_write": false, 00:29:18.576 "abort": true, 00:29:18.576 "seek_hole": false, 00:29:18.576 "seek_data": false, 00:29:18.576 "copy": true, 00:29:18.576 "nvme_iov_md": false 00:29:18.576 }, 00:29:18.576 "driver_specific": { 00:29:18.576 "nvme": [ 00:29:18.576 { 00:29:18.576 "pci_address": "0000:00:11.0", 00:29:18.576 "trid": { 00:29:18.576 "trtype": "PCIe", 00:29:18.576 "traddr": "0000:00:11.0" 00:29:18.576 }, 00:29:18.576 "ctrlr_data": { 00:29:18.576 "cntlid": 0, 00:29:18.576 "vendor_id": "0x1b36", 00:29:18.576 "model_number": "QEMU NVMe Ctrl", 00:29:18.576 "serial_number": "12341", 00:29:18.576 "firmware_revision": "8.0.0", 00:29:18.576 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:18.576 "oacs": { 00:29:18.576 "security": 0, 00:29:18.576 "format": 1, 00:29:18.576 "firmware": 0, 00:29:18.576 "ns_manage": 1 00:29:18.576 }, 00:29:18.576 "multi_ctrlr": false, 00:29:18.576 "ana_reporting": false 00:29:18.576 }, 00:29:18.576 "vs": { 00:29:18.576 "nvme_version": "1.4" 00:29:18.576 }, 00:29:18.576 "ns_data": { 00:29:18.576 "id": 1, 00:29:18.576 "can_share": false 00:29:18.576 } 00:29:18.576 } 00:29:18.576 ], 00:29:18.576 "mp_policy": "active_passive" 00:29:18.576 } 00:29:18.576 } 00:29:18.576 ]' 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=1310720 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:29:18.576 04:49:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 5120 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0306e4b3-a171-45d7-b382-f6d3235984dd 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:18.838 04:49:41 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0306e4b3-a171-45d7-b382-f6d3235984dd 00:29:19.097 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:19.355 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=4fbf60bb-d817-490e-a2f0-08fc990e8c1f 00:29:19.355 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4fbf60bb-d817-490e-a2f0-08fc990e8c1f 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:29:19.620 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:19.878 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:19.878 { 00:29:19.878 "name": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:19.878 "aliases": [ 00:29:19.878 "lvs/nvme0n1p0" 00:29:19.878 ], 00:29:19.878 "product_name": "Logical Volume", 00:29:19.878 "block_size": 4096, 00:29:19.878 "num_blocks": 26476544, 00:29:19.878 "uuid": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:19.878 "assigned_rate_limits": { 00:29:19.878 "rw_ios_per_sec": 0, 00:29:19.878 "rw_mbytes_per_sec": 0, 00:29:19.878 "r_mbytes_per_sec": 0, 00:29:19.878 "w_mbytes_per_sec": 0 00:29:19.878 }, 00:29:19.878 "claimed": false, 00:29:19.878 "zoned": false, 00:29:19.878 "supported_io_types": { 00:29:19.878 "read": true, 00:29:19.878 "write": true, 00:29:19.878 "unmap": true, 00:29:19.878 "flush": false, 00:29:19.878 "reset": true, 00:29:19.878 "nvme_admin": false, 00:29:19.878 "nvme_io": false, 00:29:19.878 "nvme_io_md": false, 00:29:19.878 "write_zeroes": true, 00:29:19.878 "zcopy": false, 00:29:19.878 "get_zone_info": false, 00:29:19.878 "zone_management": false, 00:29:19.878 
"zone_append": false, 00:29:19.878 "compare": false, 00:29:19.878 "compare_and_write": false, 00:29:19.878 "abort": false, 00:29:19.878 "seek_hole": true, 00:29:19.878 "seek_data": true, 00:29:19.878 "copy": false, 00:29:19.878 "nvme_iov_md": false 00:29:19.879 }, 00:29:19.879 "driver_specific": { 00:29:19.879 "lvol": { 00:29:19.879 "lvol_store_uuid": "4fbf60bb-d817-490e-a2f0-08fc990e8c1f", 00:29:19.879 "base_bdev": "nvme0n1", 00:29:19.879 "thin_provision": true, 00:29:19.879 "num_allocated_clusters": 0, 00:29:19.879 "snapshot": false, 00:29:19.879 "clone": false, 00:29:19.879 "esnap_clone": false 00:29:19.879 } 00:29:19.879 } 00:29:19.879 } 00:29:19.879 ]' 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:19.879 04:49:42 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:29:20.137 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:20.395 { 00:29:20.395 "name": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:20.395 "aliases": [ 00:29:20.395 "lvs/nvme0n1p0" 00:29:20.395 ], 00:29:20.395 "product_name": "Logical Volume", 00:29:20.395 "block_size": 4096, 00:29:20.395 "num_blocks": 26476544, 00:29:20.395 "uuid": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:20.395 "assigned_rate_limits": { 00:29:20.395 "rw_ios_per_sec": 0, 00:29:20.395 "rw_mbytes_per_sec": 0, 00:29:20.395 "r_mbytes_per_sec": 0, 00:29:20.395 "w_mbytes_per_sec": 0 00:29:20.395 }, 00:29:20.395 "claimed": false, 00:29:20.395 "zoned": false, 00:29:20.395 "supported_io_types": { 00:29:20.395 "read": true, 00:29:20.395 "write": true, 00:29:20.395 "unmap": true, 00:29:20.395 "flush": false, 00:29:20.395 "reset": true, 00:29:20.395 "nvme_admin": false, 00:29:20.395 "nvme_io": false, 00:29:20.395 "nvme_io_md": false, 00:29:20.395 "write_zeroes": true, 00:29:20.395 "zcopy": false, 00:29:20.395 "get_zone_info": false, 00:29:20.395 
"zone_management": false, 00:29:20.395 "zone_append": false, 00:29:20.395 "compare": false, 00:29:20.395 "compare_and_write": false, 00:29:20.395 "abort": false, 00:29:20.395 "seek_hole": true, 00:29:20.395 "seek_data": true, 00:29:20.395 "copy": false, 00:29:20.395 "nvme_iov_md": false 00:29:20.395 }, 00:29:20.395 "driver_specific": { 00:29:20.395 "lvol": { 00:29:20.395 "lvol_store_uuid": "4fbf60bb-d817-490e-a2f0-08fc990e8c1f", 00:29:20.395 "base_bdev": "nvme0n1", 00:29:20.395 "thin_provision": true, 00:29:20.395 "num_allocated_clusters": 0, 00:29:20.395 "snapshot": false, 00:29:20.395 "clone": false, 00:29:20.395 "esnap_clone": false 00:29:20.395 } 00:29:20.395 } 00:29:20.395 } 00:29:20.395 ]' 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:20.395 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:29:20.396 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 798458e9-84c7-4558-93ff-b38d7f231c53 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:20.654 { 00:29:20.654 "name": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:20.654 "aliases": [ 00:29:20.654 "lvs/nvme0n1p0" 00:29:20.654 ], 00:29:20.654 "product_name": "Logical Volume", 00:29:20.654 "block_size": 4096, 00:29:20.654 "num_blocks": 26476544, 00:29:20.654 "uuid": "798458e9-84c7-4558-93ff-b38d7f231c53", 00:29:20.654 "assigned_rate_limits": { 00:29:20.654 "rw_ios_per_sec": 0, 00:29:20.654 "rw_mbytes_per_sec": 0, 00:29:20.654 "r_mbytes_per_sec": 0, 00:29:20.654 "w_mbytes_per_sec": 0 00:29:20.654 }, 00:29:20.654 "claimed": false, 00:29:20.654 "zoned": false, 00:29:20.654 "supported_io_types": { 00:29:20.654 "read": true, 00:29:20.654 "write": true, 00:29:20.654 "unmap": true, 00:29:20.654 "flush": false, 00:29:20.654 "reset": true, 00:29:20.654 "nvme_admin": false, 00:29:20.654 "nvme_io": false, 00:29:20.654 "nvme_io_md": false, 00:29:20.654 "write_zeroes": true, 00:29:20.654 "zcopy": false, 00:29:20.654 "get_zone_info": false, 00:29:20.654 "zone_management": false, 00:29:20.654 "zone_append": false, 00:29:20.654 "compare": false, 00:29:20.654 "compare_and_write": false, 00:29:20.654 "abort": false, 
00:29:20.654 "seek_hole": true, 00:29:20.654 "seek_data": true, 00:29:20.654 "copy": false, 00:29:20.654 "nvme_iov_md": false 00:29:20.654 }, 00:29:20.654 "driver_specific": { 00:29:20.654 "lvol": { 00:29:20.654 "lvol_store_uuid": "4fbf60bb-d817-490e-a2f0-08fc990e8c1f", 00:29:20.654 "base_bdev": "nvme0n1", 00:29:20.654 "thin_provision": true, 00:29:20.654 "num_allocated_clusters": 0, 00:29:20.654 "snapshot": false, 00:29:20.654 "clone": false, 00:29:20.654 "esnap_clone": false 00:29:20.654 } 00:29:20.654 } 00:29:20.654 } 00:29:20.654 ]' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 798458e9-84c7-4558-93ff-b38d7f231c53 --l2p_dram_limit 10' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:20.654 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:20.655 04:49:43 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 798458e9-84c7-4558-93ff-b38d7f231c53 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:20.914 [2024-11-03 04:49:43.910444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.910485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:20.914 [2024-11-03 04:49:43.910499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:20.914 [2024-11-03 04:49:43.910506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.910552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.910573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.914 [2024-11-03 04:49:43.910582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:20.914 [2024-11-03 04:49:43.910590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.910610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:20.914 [2024-11-03 04:49:43.911231] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:20.914 [2024-11-03 04:49:43.911252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.911258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.914 [2024-11-03 04:49:43.911266] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:29:20.914 [2024-11-03 04:49:43.911272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.911298] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 00:29:20.914 [2024-11-03 04:49:43.912252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.912286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:20.914 [2024-11-03 04:49:43.912294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:20.914 [2024-11-03 04:49:43.912301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.917040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.917071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.914 [2024-11-03 04:49:43.917079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:29:20.914 [2024-11-03 04:49:43.917089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.917192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.917202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.914 [2024-11-03 04:49:43.917209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:20.914 [2024-11-03 04:49:43.917219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.917250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.917259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:20.914 [2024-11-03 04:49:43.917265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:20.914 [2024-11-03 04:49:43.917271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.917294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:20.914 [2024-11-03 04:49:43.920170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.920195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.914 [2024-11-03 04:49:43.920205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:29:20.914 [2024-11-03 04:49:43.920213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.920240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.920247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:20.914 [2024-11-03 04:49:43.920254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:20.914 [2024-11-03 04:49:43.920260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.920274] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:20.914 [2024-11-03 04:49:43.920377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:20.914 [2024-11-03 04:49:43.920389] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:20.914 [2024-11-03 04:49:43.920397] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:20.914 [2024-11-03 04:49:43.920407] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920414] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:20.914 [2024-11-03 04:49:43.920427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:20.914 [2024-11-03 04:49:43.920433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:20.914 [2024-11-03 04:49:43.920439] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:20.914 [2024-11-03 04:49:43.920447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.920453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:20.914 [2024-11-03 04:49:43.920460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:29:20.914 [2024-11-03 04:49:43.920472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.920537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.914 [2024-11-03 04:49:43.920543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:20.914 [2024-11-03 04:49:43.920550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:20.914 [2024-11-03 04:49:43.920575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.914 [2024-11-03 04:49:43.920650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:20.914 [2024-11-03 04:49:43.920659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:20.914 [2024-11-03 04:49:43.920666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:20.914 [2024-11-03 04:49:43.920684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:20.914 [2024-11-03 04:49:43.920702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.914 [2024-11-03 04:49:43.920714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:20.914 [2024-11-03 04:49:43.920719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:20.914 [2024-11-03 04:49:43.920725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.914 [2024-11-03 04:49:43.920730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:20.914 [2024-11-03 04:49:43.920737] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:20.914 [2024-11-03 04:49:43.920742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:20.914 [2024-11-03 04:49:43.920755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:20.914 [2024-11-03 04:49:43.920773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:20.914 [2024-11-03 04:49:43.920789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:20.914 [2024-11-03 04:49:43.920806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:20.914 [2024-11-03 04:49:43.920823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:20.914 [2024-11-03 04:49:43.920829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.914 [2024-11-03 04:49:43.920834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:20.914 [2024-11-03 04:49:43.920841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:20.915 [2024-11-03 04:49:43.920846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.915 [2024-11-03 04:49:43.920852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:20.915 [2024-11-03 04:49:43.920857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:20.915 [2024-11-03 04:49:43.920863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.915 [2024-11-03 04:49:43.920868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:20.915 [2024-11-03 04:49:43.920874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:20.915 [2024-11-03 04:49:43.920879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.915 [2024-11-03 04:49:43.920885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:20.915 [2024-11-03 04:49:43.920891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:20.915 [2024-11-03 04:49:43.920897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.915 [2024-11-03 04:49:43.920901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:20.915 [2024-11-03 04:49:43.920909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:20.915 [2024-11-03 04:49:43.920914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:20.915 [2024-11-03 04:49:43.920922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.915 [2024-11-03 04:49:43.920928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:20.915 [2024-11-03 04:49:43.920936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:20.915 [2024-11-03 04:49:43.920941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:20.915 [2024-11-03 04:49:43.920948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:20.915 [2024-11-03 04:49:43.920952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:20.915 [2024-11-03 04:49:43.920959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:20.915 [2024-11-03 04:49:43.920967] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:20.915 [2024-11-03 04:49:43.920976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.920982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:20.915 [2024-11-03 04:49:43.920989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:20.915 [2024-11-03 04:49:43.920994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:20.915 [2024-11-03 04:49:43.921001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:20.915 [2024-11-03 04:49:43.921006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:20.915 [2024-11-03 04:49:43.921013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:20.915 [2024-11-03 04:49:43.921019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:20.915 [2024-11-03 04:49:43.921025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:20.915 [2024-11-03 04:49:43.921031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:20.915 [2024-11-03 04:49:43.921039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
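The layout numbers above are internally consistent: the l2p region size follows directly from the entry count and address size printed a few lines earlier, and the --l2p_dram_limit 10 passed at create time caps how much of that table stays resident. A quick check of the arithmetic:

  # 20971520 L2P entries * 4-byte addresses = 83886080 bytes = 80 MiB,
  # matching "Region l2p ... blocks: 80.00 MiB" in the dump above
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80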
00:29:20.915 [2024-11-03 04:49:43.921068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:20.915 [2024-11-03 04:49:43.921077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:20.915 [2024-11-03 04:49:43.921092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:20.915 [2024-11-03 04:49:43.921099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:20.915 [2024-11-03 04:49:43.921106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:20.915 [2024-11-03 04:49:43.921111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.915 [2024-11-03 04:49:43.921118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:20.915 [2024-11-03 04:49:43.921124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:29:20.915 [2024-11-03 04:49:43.921131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.915 [2024-11-03 04:49:43.921172] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:20.915 [2024-11-03 04:49:43.921182] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:24.234 [2024-11-03 04:49:46.876626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.876696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:24.234 [2024-11-03 04:49:46.876712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2955.442 ms 00:29:24.234 [2024-11-03 04:49:46.876722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.903710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.903758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.234 [2024-11-03 04:49:46.903771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.775 ms 00:29:24.234 [2024-11-03 04:49:46.903781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.903909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.903921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:24.234 [2024-11-03 04:49:46.903930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:24.234 [2024-11-03 04:49:46.903942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.936305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.936351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.234 [2024-11-03 04:49:46.936363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.329 ms 00:29:24.234 [2024-11-03 04:49:46.936374] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.936405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.936416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.234 [2024-11-03 04:49:46.936424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:24.234 [2024-11-03 04:49:46.936437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.936963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.936987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.234 [2024-11-03 04:49:46.936997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:29:24.234 [2024-11-03 04:49:46.937007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.937117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.937136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.234 [2024-11-03 04:49:46.937145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:24.234 [2024-11-03 04:49:46.937158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.953605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.953651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.234 [2024-11-03 04:49:46.953662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.425 ms 00:29:24.234 [2024-11-03 04:49:46.953676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:46.966538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:24.234 [2024-11-03 04:49:46.970467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:46.970517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:24.234 [2024-11-03 04:49:46.970532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.685 ms 00:29:24.234 [2024-11-03 04:49:46.970540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.074029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.074119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:24.234 [2024-11-03 04:49:47.074141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.431 ms 00:29:24.234 [2024-11-03 04:49:47.074150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.074368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.074381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:24.234 [2024-11-03 04:49:47.074397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:29:24.234 [2024-11-03 04:49:47.074409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.100983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.101036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:29:24.234 [2024-11-03 04:49:47.101055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:29:24.234 [2024-11-03 04:49:47.101064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.126502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.126549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:24.234 [2024-11-03 04:49:47.126582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.379 ms 00:29:24.234 [2024-11-03 04:49:47.126590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.127216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.127246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:24.234 [2024-11-03 04:49:47.127257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:29:24.234 [2024-11-03 04:49:47.127266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.216116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.216177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:24.234 [2024-11-03 04:49:47.216201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.800 ms 00:29:24.234 [2024-11-03 04:49:47.216211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.243089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.243293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:24.234 [2024-11-03 04:49:47.243327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.775 ms 00:29:24.234 [2024-11-03 04:49:47.243336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.234 [2024-11-03 04:49:47.268640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.234 [2024-11-03 04:49:47.268688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:24.234 [2024-11-03 04:49:47.268704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.254 ms 00:29:24.234 [2024-11-03 04:49:47.268712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.235 [2024-11-03 04:49:47.295039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.235 [2024-11-03 04:49:47.295087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:24.235 [2024-11-03 04:49:47.295103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.272 ms 00:29:24.235 [2024-11-03 04:49:47.295111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.235 [2024-11-03 04:49:47.295168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.235 [2024-11-03 04:49:47.295178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:24.235 [2024-11-03 04:49:47.295193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:24.235 [2024-11-03 04:49:47.295202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.235 [2024-11-03 04:49:47.295299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.235 [2024-11-03 
04:49:47.295310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:24.235 [2024-11-03 04:49:47.295322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:24.235 [2024-11-03 04:49:47.295330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.235 [2024-11-03 04:49:47.296478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3385.523 ms, result 0 00:29:24.235 { 00:29:24.235 "name": "ftl0", 00:29:24.235 "uuid": "aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3" 00:29:24.235 } 00:29:24.497 04:49:47 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:24.497 04:49:47 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:24.497 04:49:47 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:24.497 04:49:47 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:24.758 [2024-11-03 04:49:47.751853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.753995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:24.758 [2024-11-03 04:49:47.754025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:24.758 [2024-11-03 04:49:47.754047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.754091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:24.758 [2024-11-03 04:49:47.757161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.757204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:24.758 [2024-11-03 04:49:47.757219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.044 ms 00:29:24.758 [2024-11-03 04:49:47.757228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.757539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.757552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:24.758 [2024-11-03 04:49:47.757591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:29:24.758 [2024-11-03 04:49:47.757603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.760856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.760878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:24.758 [2024-11-03 04:49:47.760890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:29:24.758 [2024-11-03 04:49:47.760899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.767215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.767254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:24.758 [2024-11-03 04:49:47.767269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:29:24.758 [2024-11-03 04:49:47.767277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.793142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
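Before tearing the device down, restore.sh wraps the saved bdev subsystem config in a subsystems array, as echoed above; the result is presumably what ends up in test/ftl/config/ftl.json, since that is the file the later spdk_dd step is pointed at. A sketch of that assembly, with the redirection added for illustration (the log does not show where the output is written):

  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed target, taken from the later --json argument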
[FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.793190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:24.758 [2024-11-03 04:49:47.793206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:29:24.758 [2024-11-03 04:49:47.793214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.810522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.810586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:24.758 [2024-11-03 04:49:47.810603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.250 ms 00:29:24.758 [2024-11-03 04:49:47.810611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.810783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.810796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:24.758 [2024-11-03 04:49:47.810808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:29:24.758 [2024-11-03 04:49:47.810816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.758 [2024-11-03 04:49:47.836057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.758 [2024-11-03 04:49:47.836102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:24.758 [2024-11-03 04:49:47.836117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.216 ms 00:29:24.758 [2024-11-03 04:49:47.836124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.021 [2024-11-03 04:49:47.861107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.021 [2024-11-03 04:49:47.861150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:25.021 [2024-11-03 04:49:47.861165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.928 ms 00:29:25.021 [2024-11-03 04:49:47.861173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.021 [2024-11-03 04:49:47.885342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.021 [2024-11-03 04:49:47.885525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:25.021 [2024-11-03 04:49:47.885549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:29:25.021 [2024-11-03 04:49:47.885573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.021 [2024-11-03 04:49:47.909577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.021 [2024-11-03 04:49:47.909622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:25.021 [2024-11-03 04:49:47.909636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.908 ms 00:29:25.021 [2024-11-03 04:49:47.909644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.021 [2024-11-03 04:49:47.909692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:25.021 [2024-11-03 04:49:47.909708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909729] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909955] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.909998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:25.021 [2024-11-03 04:49:47.910006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 
[2024-11-03 04:49:47.910180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:29:25.022 [2024-11-03 04:49:47.910409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:25.022 [2024-11-03 04:49:47.910637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:25.022 [2024-11-03 04:49:47.910647] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 
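Every band in the validity dump reports 0 of 261120 blocks used; assuming the 4096-byte block size reported for the underlying bdev, that works out to roughly 1 GiB of user data per band:

  # 261120 blocks/band * 4096 bytes/block = 1069547520 bytes = 1020 MiB per band (block size assumed)
  echo $(( 261120 * 4096 / 1024 / 1024 ))   # -> 1020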
00:29:25.022 [2024-11-03 04:49:47.910657] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:25.022 [2024-11-03 04:49:47.910672] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:25.022 [2024-11-03 04:49:47.910679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:25.022 [2024-11-03 04:49:47.910690] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:25.022 [2024-11-03 04:49:47.910701] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:25.022 [2024-11-03 04:49:47.910711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:25.022 [2024-11-03 04:49:47.910718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:25.022 [2024-11-03 04:49:47.910727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:25.022 [2024-11-03 04:49:47.910735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:25.022 [2024-11-03 04:49:47.910746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.022 [2024-11-03 04:49:47.910755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:25.022 [2024-11-03 04:49:47.910766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:29:25.022 [2024-11-03 04:49:47.910774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.022 [2024-11-03 04:49:47.924602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.022 [2024-11-03 04:49:47.924642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:25.022 [2024-11-03 04:49:47.924656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.782 ms 00:29:25.023 [2024-11-03 04:49:47.924664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:47.925068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.023 [2024-11-03 04:49:47.925086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:25.023 [2024-11-03 04:49:47.925098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:29:25.023 [2024-11-03 04:49:47.925106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:47.971284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.023 [2024-11-03 04:49:47.971331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:25.023 [2024-11-03 04:49:47.971345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.023 [2024-11-03 04:49:47.971354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:47.971424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.023 [2024-11-03 04:49:47.971433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:25.023 [2024-11-03 04:49:47.971444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.023 [2024-11-03 04:49:47.971452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:47.971549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.023 [2024-11-03 04:49:47.971586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:25.023 [2024-11-03 04:49:47.971597] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.023 [2024-11-03 04:49:47.971604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:47.971628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.023 [2024-11-03 04:49:47.971636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:25.023 [2024-11-03 04:49:47.971645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.023 [2024-11-03 04:49:47.971653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.023 [2024-11-03 04:49:48.055293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.023 [2024-11-03 04:49:48.055543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:25.023 [2024-11-03 04:49:48.055603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.023 [2024-11-03 04:49:48.055613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.284 [2024-11-03 04:49:48.124285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.284 [2024-11-03 04:49:48.124342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:25.284 [2024-11-03 04:49:48.124357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.284 [2024-11-03 04:49:48.124366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.284 [2024-11-03 04:49:48.124460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.284 [2024-11-03 04:49:48.124475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:25.284 [2024-11-03 04:49:48.124486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.284 [2024-11-03 04:49:48.124494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.124618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.285 [2024-11-03 04:49:48.124630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:25.285 [2024-11-03 04:49:48.124642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.285 [2024-11-03 04:49:48.124650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.124762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.285 [2024-11-03 04:49:48.124773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:25.285 [2024-11-03 04:49:48.124786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.285 [2024-11-03 04:49:48.124794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.124832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.285 [2024-11-03 04:49:48.124842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:25.285 [2024-11-03 04:49:48.124853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.285 [2024-11-03 04:49:48.124860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.124905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.285 [2024-11-03 04:49:48.124914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:29:25.285 [2024-11-03 04:49:48.124928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.285 [2024-11-03 04:49:48.124936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.124989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.285 [2024-11-03 04:49:48.125000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:25.285 [2024-11-03 04:49:48.125011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.285 [2024-11-03 04:49:48.125019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.285 [2024-11-03 04:49:48.125167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.274 ms, result 0 00:29:25.285 true 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81993 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 81993 ']' 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 81993 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # uname 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81993 00:29:25.285 killing process with pid 81993 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81993' 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@971 -- # kill 81993 00:29:25.285 04:49:48 ftl.ftl_restore_fast -- common/autotest_common.sh@976 -- # wait 81993 00:29:31.896 04:49:54 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:36.098 262144+0 records in 00:29:36.098 262144+0 records out 00:29:36.098 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.92426 s, 274 MB/s 00:29:36.098 04:49:58 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:38.024 04:50:00 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:38.024 [2024-11-03 04:50:00.660324] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
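The dd parameters above determine the payload size exactly: bs=4K with count=256K is 4096 * 262144 = 1073741824 bytes, the 1.0 GiB reported, and the md5sum is presumably recorded so the data can be compared after the restore. The write into the FTL bdev then goes through spdk_dd against the saved config, as invoked above; a condensed restatement of those steps (paths shortened for readability):

  dd if=/dev/urandom of=testfile bs=4K count=256K         # 1 GiB of random test data
  md5sum testfile                                         # checksum kept for the post-restore comparison (assumed purpose)
  spdk_dd --if=testfile --ob=ftl0 --json=config/ftl.json  # write the file into the ftl0 bdev using the saved bdev config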
00:29:38.024 [2024-11-03 04:50:00.660416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82214 ] 00:29:38.024 [2024-11-03 04:50:00.814845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.024 [2024-11-03 04:50:00.925877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.285 [2024-11-03 04:50:01.215259] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:38.285 [2024-11-03 04:50:01.215337] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:38.547 [2024-11-03 04:50:01.377388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.547 [2024-11-03 04:50:01.377451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:38.547 [2024-11-03 04:50:01.377471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:38.547 [2024-11-03 04:50:01.377481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.547 [2024-11-03 04:50:01.377540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.547 [2024-11-03 04:50:01.377552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:38.547 [2024-11-03 04:50:01.377585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:38.547 [2024-11-03 04:50:01.377594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.547 [2024-11-03 04:50:01.377615] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:38.548 [2024-11-03 04:50:01.378315] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:38.548 [2024-11-03 04:50:01.378336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.378344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:38.548 [2024-11-03 04:50:01.378354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:29:38.548 [2024-11-03 04:50:01.378362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.380018] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:38.548 [2024-11-03 04:50:01.394241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.394292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:38.548 [2024-11-03 04:50:01.394307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.224 ms 00:29:38.548 [2024-11-03 04:50:01.394315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.394394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.394408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:38.548 [2024-11-03 04:50:01.394418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:38.548 [2024-11-03 04:50:01.394426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.402353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:38.548 [2024-11-03 04:50:01.402572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:38.548 [2024-11-03 04:50:01.402591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.854 ms 00:29:38.548 [2024-11-03 04:50:01.402599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.402690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.402700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:38.548 [2024-11-03 04:50:01.402709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:38.548 [2024-11-03 04:50:01.402717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.402760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.402770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:38.548 [2024-11-03 04:50:01.402779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:38.548 [2024-11-03 04:50:01.402787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.402811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:38.548 [2024-11-03 04:50:01.407006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.407045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:38.548 [2024-11-03 04:50:01.407055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.202 ms 00:29:38.548 [2024-11-03 04:50:01.407067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.407102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.407110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:38.548 [2024-11-03 04:50:01.407119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:38.548 [2024-11-03 04:50:01.407127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.407178] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:38.548 [2024-11-03 04:50:01.407202] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:38.548 [2024-11-03 04:50:01.407243] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:38.548 [2024-11-03 04:50:01.407263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:38.548 [2024-11-03 04:50:01.407368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:38.548 [2024-11-03 04:50:01.407380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:38.548 [2024-11-03 04:50:01.407392] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:38.548 [2024-11-03 04:50:01.407403] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407413] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:38.548 [2024-11-03 04:50:01.407429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:38.548 [2024-11-03 04:50:01.407438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:38.548 [2024-11-03 04:50:01.407446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:38.548 [2024-11-03 04:50:01.407456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.407464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:38.548 [2024-11-03 04:50:01.407472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:29:38.548 [2024-11-03 04:50:01.407480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.407582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.548 [2024-11-03 04:50:01.407591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:38.548 [2024-11-03 04:50:01.407600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:38.548 [2024-11-03 04:50:01.407608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.548 [2024-11-03 04:50:01.407712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:38.548 [2024-11-03 04:50:01.407729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:38.548 [2024-11-03 04:50:01.407738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:38.548 [2024-11-03 04:50:01.407762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:38.548 [2024-11-03 04:50:01.407784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:38.548 [2024-11-03 04:50:01.407799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:38.548 [2024-11-03 04:50:01.407807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:38.548 [2024-11-03 04:50:01.407814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:38.548 [2024-11-03 04:50:01.407821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:38.548 [2024-11-03 04:50:01.407830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:38.548 [2024-11-03 04:50:01.407845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:38.548 [2024-11-03 04:50:01.407861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407868] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:38.548 [2024-11-03 04:50:01.407883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:38.548 [2024-11-03 04:50:01.407905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:38.548 [2024-11-03 04:50:01.407926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:38.548 [2024-11-03 04:50:01.407945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.548 [2024-11-03 04:50:01.407959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:38.548 [2024-11-03 04:50:01.407966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:38.548 [2024-11-03 04:50:01.407972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:38.548 [2024-11-03 04:50:01.407979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:38.548 [2024-11-03 04:50:01.407986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:38.548 [2024-11-03 04:50:01.407993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:38.548 [2024-11-03 04:50:01.407999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:38.548 [2024-11-03 04:50:01.408007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:38.548 [2024-11-03 04:50:01.408013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.408020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:38.548 [2024-11-03 04:50:01.408026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:38.548 [2024-11-03 04:50:01.408033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.408039] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:38.548 [2024-11-03 04:50:01.408046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:38.548 [2024-11-03 04:50:01.408053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:38.548 [2024-11-03 04:50:01.408064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.548 [2024-11-03 04:50:01.408074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:38.548 [2024-11-03 04:50:01.408081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:38.548 [2024-11-03 04:50:01.408088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:38.548 
[2024-11-03 04:50:01.408094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:38.548 [2024-11-03 04:50:01.408100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:38.548 [2024-11-03 04:50:01.408107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:38.548 [2024-11-03 04:50:01.408116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:38.548 [2024-11-03 04:50:01.408125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:38.549 [2024-11-03 04:50:01.408141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:38.549 [2024-11-03 04:50:01.408148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:38.549 [2024-11-03 04:50:01.408156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:38.549 [2024-11-03 04:50:01.408163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:38.549 [2024-11-03 04:50:01.408170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:38.549 [2024-11-03 04:50:01.408177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:38.549 [2024-11-03 04:50:01.408184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:38.549 [2024-11-03 04:50:01.408191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:38.549 [2024-11-03 04:50:01.408198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:38.549 [2024-11-03 04:50:01.408232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:38.549 [2024-11-03 04:50:01.408241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:38.549 [2024-11-03 04:50:01.408259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:38.549 [2024-11-03 04:50:01.408267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:38.549 [2024-11-03 04:50:01.408275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:38.549 [2024-11-03 04:50:01.408284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.408291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:38.549 [2024-11-03 04:50:01.408300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:29:38.549 [2024-11-03 04:50:01.408313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.439939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.439994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.549 [2024-11-03 04:50:01.440006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.579 ms 00:29:38.549 [2024-11-03 04:50:01.440015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.440106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.440120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:38.549 [2024-11-03 04:50:01.440129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:38.549 [2024-11-03 04:50:01.440138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.484721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.484939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.549 [2024-11-03 04:50:01.484962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.525 ms 00:29:38.549 [2024-11-03 04:50:01.484972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.485020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.485030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:38.549 [2024-11-03 04:50:01.485040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:38.549 [2024-11-03 04:50:01.485054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.485640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.485664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:38.549 [2024-11-03 04:50:01.485676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:29:38.549 [2024-11-03 04:50:01.485685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.485832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.485844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:38.549 [2024-11-03 04:50:01.485852] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:29:38.549 [2024-11-03 04:50:01.485861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.501809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.501852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:38.549 [2024-11-03 04:50:01.501863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.923 ms 00:29:38.549 [2024-11-03 04:50:01.501875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.516191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:38.549 [2024-11-03 04:50:01.516380] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:38.549 [2024-11-03 04:50:01.516401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.516409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:38.549 [2024-11-03 04:50:01.516419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.416 ms 00:29:38.549 [2024-11-03 04:50:01.516428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.541572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.541620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:38.549 [2024-11-03 04:50:01.541639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.027 ms 00:29:38.549 [2024-11-03 04:50:01.541647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.554369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.554423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:38.549 [2024-11-03 04:50:01.554434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.671 ms 00:29:38.549 [2024-11-03 04:50:01.554442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.566661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.566706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:38.549 [2024-11-03 04:50:01.566717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.175 ms 00:29:38.549 [2024-11-03 04:50:01.566724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.549 [2024-11-03 04:50:01.567361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.549 [2024-11-03 04:50:01.567386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:38.549 [2024-11-03 04:50:01.567397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:29:38.549 [2024-11-03 04:50:01.567405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.630988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.631050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:38.809 [2024-11-03 04:50:01.631067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.561 ms 00:29:38.809 [2024-11-03 04:50:01.631076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.642792] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:38.809 [2024-11-03 04:50:01.645743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.645916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:38.809 [2024-11-03 04:50:01.645936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.607 ms 00:29:38.809 [2024-11-03 04:50:01.645945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.646033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.646044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:38.809 [2024-11-03 04:50:01.646054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:38.809 [2024-11-03 04:50:01.646063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.646132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.646147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:38.809 [2024-11-03 04:50:01.646156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:38.809 [2024-11-03 04:50:01.646165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.646187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.646196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:38.809 [2024-11-03 04:50:01.646205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:38.809 [2024-11-03 04:50:01.646213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.646250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:38.809 [2024-11-03 04:50:01.646266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.646275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:38.809 [2024-11-03 04:50:01.646287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:38.809 [2024-11-03 04:50:01.646295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.671453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.671501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:38.809 [2024-11-03 04:50:01.671514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.139 ms 00:29:38.809 [2024-11-03 04:50:01.671523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.809 [2024-11-03 04:50:01.671630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.809 [2024-11-03 04:50:01.671642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:38.809 [2024-11-03 04:50:01.671652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:38.809 [2024-11-03 04:50:01.671660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:38.809 [2024-11-03 04:50:01.672918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.998 ms, result 0 00:29:39.747  [2024-11-03T04:50:03.773Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-03T04:50:04.712Z] Copying: 39/1024 [MB] (16 MBps) [2024-11-03T04:50:06.096Z] Copying: 57/1024 [MB] (17 MBps) [2024-11-03T04:50:07.037Z] Copying: 75/1024 [MB] (18 MBps) [2024-11-03T04:50:07.980Z] Copying: 91/1024 [MB] (15 MBps) [2024-11-03T04:50:08.921Z] Copying: 109/1024 [MB] (18 MBps) [2024-11-03T04:50:09.873Z] Copying: 122/1024 [MB] (12 MBps) [2024-11-03T04:50:10.818Z] Copying: 135/1024 [MB] (12 MBps) [2024-11-03T04:50:11.765Z] Copying: 145/1024 [MB] (10 MBps) [2024-11-03T04:50:12.768Z] Copying: 156/1024 [MB] (10 MBps) [2024-11-03T04:50:13.712Z] Copying: 170/1024 [MB] (13 MBps) [2024-11-03T04:50:15.099Z] Copying: 184/1024 [MB] (14 MBps) [2024-11-03T04:50:16.044Z] Copying: 200/1024 [MB] (15 MBps) [2024-11-03T04:50:16.991Z] Copying: 218/1024 [MB] (18 MBps) [2024-11-03T04:50:17.936Z] Copying: 232/1024 [MB] (14 MBps) [2024-11-03T04:50:18.880Z] Copying: 247/1024 [MB] (14 MBps) [2024-11-03T04:50:19.821Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-03T04:50:20.762Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-03T04:50:21.695Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-03T04:50:23.068Z] Copying: 306/1024 [MB] (29 MBps) [2024-11-03T04:50:24.002Z] Copying: 344/1024 [MB] (38 MBps) [2024-11-03T04:50:24.935Z] Copying: 375/1024 [MB] (31 MBps) [2024-11-03T04:50:25.869Z] Copying: 406/1024 [MB] (30 MBps) [2024-11-03T04:50:26.801Z] Copying: 432/1024 [MB] (26 MBps) [2024-11-03T04:50:27.734Z] Copying: 455/1024 [MB] (22 MBps) [2024-11-03T04:50:29.109Z] Copying: 481/1024 [MB] (25 MBps) [2024-11-03T04:50:29.731Z] Copying: 521/1024 [MB] (39 MBps) [2024-11-03T04:50:31.103Z] Copying: 542/1024 [MB] (21 MBps) [2024-11-03T04:50:32.041Z] Copying: 561/1024 [MB] (19 MBps) [2024-11-03T04:50:32.983Z] Copying: 580/1024 [MB] (18 MBps) [2024-11-03T04:50:33.920Z] Copying: 590/1024 [MB] (10 MBps) [2024-11-03T04:50:34.862Z] Copying: 608/1024 [MB] (17 MBps) [2024-11-03T04:50:35.797Z] Copying: 620/1024 [MB] (12 MBps) [2024-11-03T04:50:36.729Z] Copying: 636/1024 [MB] (16 MBps) [2024-11-03T04:50:38.101Z] Copying: 657/1024 [MB] (20 MBps) [2024-11-03T04:50:39.034Z] Copying: 677/1024 [MB] (19 MBps) [2024-11-03T04:50:39.968Z] Copying: 696/1024 [MB] (19 MBps) [2024-11-03T04:50:40.899Z] Copying: 717/1024 [MB] (20 MBps) [2024-11-03T04:50:41.832Z] Copying: 736/1024 [MB] (19 MBps) [2024-11-03T04:50:42.764Z] Copying: 755/1024 [MB] (19 MBps) [2024-11-03T04:50:43.697Z] Copying: 774/1024 [MB] (19 MBps) [2024-11-03T04:50:45.078Z] Copying: 801/1024 [MB] (26 MBps) [2024-11-03T04:50:46.013Z] Copying: 813/1024 [MB] (12 MBps) [2024-11-03T04:50:46.958Z] Copying: 825/1024 [MB] (11 MBps) [2024-11-03T04:50:47.907Z] Copying: 854/1024 [MB] (29 MBps) [2024-11-03T04:50:48.848Z] Copying: 881/1024 [MB] (26 MBps) [2024-11-03T04:50:49.784Z] Copying: 895/1024 [MB] (13 MBps) [2024-11-03T04:50:50.719Z] Copying: 912/1024 [MB] (17 MBps) [2024-11-03T04:50:52.092Z] Copying: 941/1024 [MB] (28 MBps) [2024-11-03T04:50:53.026Z] Copying: 969/1024 [MB] (28 MBps) [2024-11-03T04:50:53.594Z] Copying: 1003/1024 [MB] (34 MBps) [2024-11-03T04:50:53.594Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-03 04:50:53.393852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.510 [2024-11-03 04:50:53.393885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:30.510 
[2024-11-03 04:50:53.393896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:30.510 [2024-11-03 04:50:53.393903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.510 [2024-11-03 04:50:53.393918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:30.510 [2024-11-03 04:50:53.395998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.510 [2024-11-03 04:50:53.396023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:30.510 [2024-11-03 04:50:53.396031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:30:30.510 [2024-11-03 04:50:53.396038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.510 [2024-11-03 04:50:53.397394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.510 [2024-11-03 04:50:53.397420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:30.510 [2024-11-03 04:50:53.397428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:30:30.510 [2024-11-03 04:50:53.397434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.510 [2024-11-03 04:50:53.397453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.510 [2024-11-03 04:50:53.397460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:30.510 [2024-11-03 04:50:53.397466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:30.510 [2024-11-03 04:50:53.397472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.510 [2024-11-03 04:50:53.397507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.510 [2024-11-03 04:50:53.397514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:30.510 [2024-11-03 04:50:53.397521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:30.510 [2024-11-03 04:50:53.397527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.510 [2024-11-03 04:50:53.397537] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:30.510 [2024-11-03 04:50:53.397546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 
0 state: free 00:30:30.510 [2024-11-03 04:50:53.397612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
34: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:30.510 [2024-11-03 04:50:53.397774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397903] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.397994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398046] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:30.511 [2024-11-03 04:50:53.398151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:30.511 [2024-11-03 04:50:53.398156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 00:30:30.511 [2024-11-03 04:50:53.398162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:30.511 [2024-11-03 04:50:53.398169] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:30.511 [2024-11-03 04:50:53.398175] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:30.511 [2024-11-03 04:50:53.398180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:30.511 [2024-11-03 04:50:53.398186] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:30.511 [2024-11-03 04:50:53.398193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:30.511 [2024-11-03 04:50:53.398199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:30.511 [2024-11-03 04:50:53.398204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:30.511 [2024-11-03 
04:50:53.398209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:30.511 [2024-11-03 04:50:53.398214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.511 [2024-11-03 04:50:53.398219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:30.511 [2024-11-03 04:50:53.398225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:30:30.511 [2024-11-03 04:50:53.398231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.511 [2024-11-03 04:50:53.407942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.511 [2024-11-03 04:50:53.408057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:30.511 [2024-11-03 04:50:53.408073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:30:30.511 [2024-11-03 04:50:53.408079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.511 [2024-11-03 04:50:53.408342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:30.511 [2024-11-03 04:50:53.408360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:30.511 [2024-11-03 04:50:53.408366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:30:30.511 [2024-11-03 04:50:53.408372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.511 [2024-11-03 04:50:53.434015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.511 [2024-11-03 04:50:53.434039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:30.511 [2024-11-03 04:50:53.434050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.511 [2024-11-03 04:50:53.434055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.511 [2024-11-03 04:50:53.434101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.511 [2024-11-03 04:50:53.434108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:30.511 [2024-11-03 04:50:53.434114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.511 [2024-11-03 04:50:53.434120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.511 [2024-11-03 04:50:53.434151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.511 [2024-11-03 04:50:53.434158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:30.512 [2024-11-03 04:50:53.434164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.434172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.434183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.434189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:30.512 [2024-11-03 04:50:53.434195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.434201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.492376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.492406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:30.512 [2024-11-03 04:50:53.492418] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.492424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:30.512 [2024-11-03 04:50:53.540314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:30.512 [2024-11-03 04:50:53.540371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:30.512 [2024-11-03 04:50:53.540432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:30.512 [2024-11-03 04:50:53.540506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:30.512 [2024-11-03 04:50:53.540582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:30.512 [2024-11-03 04:50:53.540625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:30.512 [2024-11-03 04:50:53.540671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:30.512 [2024-11-03 04:50:53.540677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:30.512 [2024-11-03 04:50:53.540683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:30.512 [2024-11-03 04:50:53.540770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 146.894 ms, result 0 00:30:31.452 00:30:31.452 00:30:31.452 04:50:54 
ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:31.452 [2024-11-03 04:50:54.534070] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:30:31.711 [2024-11-03 04:50:54.534346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82760 ] 00:30:31.711 [2024-11-03 04:50:54.690547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.711 [2024-11-03 04:50:54.770358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.971 [2024-11-03 04:50:54.974203] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:31.971 [2024-11-03 04:50:54.974248] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:32.237 [2024-11-03 04:50:55.131950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.132131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:32.237 [2024-11-03 04:50:55.132157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:32.237 [2024-11-03 04:50:55.132166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.132227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.132237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:32.237 [2024-11-03 04:50:55.132248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:32.237 [2024-11-03 04:50:55.132256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.132276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:32.237 [2024-11-03 04:50:55.133014] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:32.237 [2024-11-03 04:50:55.133036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:32.237 [2024-11-03 04:50:55.133053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:30:32.237 [2024-11-03 04:50:55.133061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133353] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:32.237 [2024-11-03 04:50:55.133387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:32.237 [2024-11-03 04:50:55.133408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:32.237 [2024-11-03 04:50:55.133415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Validate super block 00:30:32.237 [2024-11-03 04:50:55.133476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:32.237 [2024-11-03 04:50:55.133483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:32.237 [2024-11-03 04:50:55.133779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:30:32.237 [2024-11-03 04:50:55.133786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:32.237 [2024-11-03 04:50:55.133868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:32.237 [2024-11-03 04:50:55.133875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.133907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:32.237 [2024-11-03 04:50:55.133915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:32.237 [2024-11-03 04:50:55.133925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.133942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:32.237 [2024-11-03 04:50:55.137744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.137777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:32.237 [2024-11-03 04:50:55.137786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.805 ms 00:30:32.237 [2024-11-03 04:50:55.137794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.137827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.237 [2024-11-03 04:50:55.137835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:32.237 [2024-11-03 04:50:55.137843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:32.237 [2024-11-03 04:50:55.137850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.237 [2024-11-03 04:50:55.137897] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:32.237 [2024-11-03 04:50:55.137917] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:32.237 [2024-11-03 04:50:55.137953] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:32.237 [2024-11-03 04:50:55.137969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:32.237 [2024-11-03 04:50:55.138070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:32.237 [2024-11-03 04:50:55.138082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
00:30:32.238 [2024-11-03 04:50:55.138092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:32.238 [2024-11-03 04:50:55.138102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138120] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:32.238 [2024-11-03 04:50:55.138128] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:32.238 [2024-11-03 04:50:55.138139] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:32.238 [2024-11-03 04:50:55.138146] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:32.238 [2024-11-03 04:50:55.138154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.238 [2024-11-03 04:50:55.138162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:32.238 [2024-11-03 04:50:55.138171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:30:32.238 [2024-11-03 04:50:55.138178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.238 [2024-11-03 04:50:55.138260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.238 [2024-11-03 04:50:55.138269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:32.238 [2024-11-03 04:50:55.138278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:32.238 [2024-11-03 04:50:55.138285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.238 [2024-11-03 04:50:55.138389] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:32.238 [2024-11-03 04:50:55.138401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:32.238 [2024-11-03 04:50:55.138409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:32.238 [2024-11-03 04:50:55.138432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:32.238 [2024-11-03 04:50:55.138455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:32.238 [2024-11-03 04:50:55.138469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:32.238 [2024-11-03 04:50:55.138477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:32.238 [2024-11-03 04:50:55.138483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:32.238 [2024-11-03 04:50:55.138490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:32.238 [2024-11-03 04:50:55.138497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:32.238 [2024-11-03 04:50:55.138503] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:32.238 [2024-11-03 04:50:55.138524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:32.238 [2024-11-03 04:50:55.138546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:32.238 [2024-11-03 04:50:55.138585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:32.238 [2024-11-03 04:50:55.138606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:32.238 [2024-11-03 04:50:55.138627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:32.238 [2024-11-03 04:50:55.138646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:32.238 [2024-11-03 04:50:55.138661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:32.238 [2024-11-03 04:50:55.138667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:32.238 [2024-11-03 04:50:55.138673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:32.238 [2024-11-03 04:50:55.138681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:32.238 [2024-11-03 04:50:55.138688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:32.238 [2024-11-03 04:50:55.138695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:32.238 [2024-11-03 04:50:55.138708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:32.238 [2024-11-03 04:50:55.138715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 [2024-11-03 04:50:55.138721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:32.238 [2024-11-03 04:50:55.138730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:32.238 [2024-11-03 04:50:55.138737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.238 
[2024-11-03 04:50:55.138752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:32.238 [2024-11-03 04:50:55.138758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:32.238 [2024-11-03 04:50:55.138765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:32.238 [2024-11-03 04:50:55.138774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:32.238 [2024-11-03 04:50:55.138780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:32.238 [2024-11-03 04:50:55.138787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:32.238 [2024-11-03 04:50:55.138795] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:32.238 [2024-11-03 04:50:55.138804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:32.238 [2024-11-03 04:50:55.138823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:32.238 [2024-11-03 04:50:55.138830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:32.238 [2024-11-03 04:50:55.138837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:32.238 [2024-11-03 04:50:55.138844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:32.238 [2024-11-03 04:50:55.138851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:32.238 [2024-11-03 04:50:55.138859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:32.238 [2024-11-03 04:50:55.138866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:32.238 [2024-11-03 04:50:55.138873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:32.238 [2024-11-03 04:50:55.138880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:32.238 [2024-11-03 04:50:55.138916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:30:32.238 [2024-11-03 04:50:55.138924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:32.238 [2024-11-03 04:50:55.138939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:32.238 [2024-11-03 04:50:55.138947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:32.238 [2024-11-03 04:50:55.138953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:32.238 [2024-11-03 04:50:55.138960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.238 [2024-11-03 04:50:55.138968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:32.238 [2024-11-03 04:50:55.138975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:30:32.238 [2024-11-03 04:50:55.138982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.238 [2024-11-03 04:50:55.164278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.238 [2024-11-03 04:50:55.164313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:32.238 [2024-11-03 04:50:55.164323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.257 ms 00:30:32.238 [2024-11-03 04:50:55.164331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.238 [2024-11-03 04:50:55.164416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.238 [2024-11-03 04:50:55.164424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:32.239 [2024-11-03 04:50:55.164432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:32.239 [2024-11-03 04:50:55.164443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.214304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.214357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:32.239 [2024-11-03 04:50:55.214371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.813 ms 00:30:32.239 [2024-11-03 04:50:55.214380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.214427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.214441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:32.239 [2024-11-03 04:50:55.214450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:32.239 [2024-11-03 04:50:55.214459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.214593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.214607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:32.239 [2024-11-03 04:50:55.214616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:30:32.239 [2024-11-03 04:50:55.214624] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.214754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.214767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:32.239 [2024-11-03 04:50:55.214780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:32.239 [2024-11-03 04:50:55.214789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.230612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.230658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:32.239 [2024-11-03 04:50:55.230670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.803 ms 00:30:32.239 [2024-11-03 04:50:55.230679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.230829] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:32.239 [2024-11-03 04:50:55.230842] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:32.239 [2024-11-03 04:50:55.230853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.230861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:32.239 [2024-11-03 04:50:55.230874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:32.239 [2024-11-03 04:50:55.230882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.243183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.243374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:32.239 [2024-11-03 04:50:55.243395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:30:32.239 [2024-11-03 04:50:55.243404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.243534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.243543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:32.239 [2024-11-03 04:50:55.243552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:32.239 [2024-11-03 04:50:55.243580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.243638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.243651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:32.239 [2024-11-03 04:50:55.243660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:32.239 [2024-11-03 04:50:55.243669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.244262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.244277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:32.239 [2024-11-03 04:50:55.244286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:30:32.239 [2024-11-03 04:50:55.244293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:30:32.239 [2024-11-03 04:50:55.244310] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:32.239 [2024-11-03 04:50:55.244324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.244333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:32.239 [2024-11-03 04:50:55.244342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:32.239 [2024-11-03 04:50:55.244350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.256883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:32.239 [2024-11-03 04:50:55.257044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.257056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:32.239 [2024-11-03 04:50:55.257067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.676 ms 00:30:32.239 [2024-11-03 04:50:55.257077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.259268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.259302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:32.239 [2024-11-03 04:50:55.259317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:30:32.239 [2024-11-03 04:50:55.259325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.259419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.259430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:32.239 [2024-11-03 04:50:55.259441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:32.239 [2024-11-03 04:50:55.259449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.259473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.259483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:32.239 [2024-11-03 04:50:55.259497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:32.239 [2024-11-03 04:50:55.259506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.259535] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:32.239 [2024-11-03 04:50:55.259545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.259553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:32.239 [2024-11-03 04:50:55.259583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:32.239 [2024-11-03 04:50:55.259591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.285950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.286124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:32.239 [2024-11-03 04:50:55.286145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:30:32.239 [2024-11-03 04:50:55.286155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.286235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.239 [2024-11-03 04:50:55.286245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:32.239 [2024-11-03 04:50:55.286254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:32.239 [2024-11-03 04:50:55.286263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.239 [2024-11-03 04:50:55.287388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.990 ms, result 0 00:30:33.627  [2024-11-03T04:50:57.653Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-03T04:50:58.596Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-03T04:50:59.541Z] Copying: 45/1024 [MB] (10 MBps) [2024-11-03T04:51:00.496Z] Copying: 60/1024 [MB] (14 MBps) [2024-11-03T04:51:01.883Z] Copying: 76/1024 [MB] (16 MBps) [2024-11-03T04:51:02.825Z] Copying: 93/1024 [MB] (16 MBps) [2024-11-03T04:51:03.797Z] Copying: 111/1024 [MB] (18 MBps) [2024-11-03T04:51:04.741Z] Copying: 132/1024 [MB] (20 MBps) [2024-11-03T04:51:05.682Z] Copying: 148/1024 [MB] (16 MBps) [2024-11-03T04:51:06.624Z] Copying: 162/1024 [MB] (13 MBps) [2024-11-03T04:51:07.567Z] Copying: 172/1024 [MB] (10 MBps) [2024-11-03T04:51:08.511Z] Copying: 184/1024 [MB] (11 MBps) [2024-11-03T04:51:09.898Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-03T04:51:10.842Z] Copying: 205/1024 [MB] (10 MBps) [2024-11-03T04:51:11.787Z] Copying: 216/1024 [MB] (11 MBps) [2024-11-03T04:51:12.727Z] Copying: 227/1024 [MB] (10 MBps) [2024-11-03T04:51:13.670Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-03T04:51:14.613Z] Copying: 252/1024 [MB] (13 MBps) [2024-11-03T04:51:15.558Z] Copying: 273/1024 [MB] (21 MBps) [2024-11-03T04:51:16.503Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-03T04:51:17.888Z] Copying: 295/1024 [MB] (10 MBps) [2024-11-03T04:51:18.837Z] Copying: 307/1024 [MB] (12 MBps) [2024-11-03T04:51:19.784Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-03T04:51:20.724Z] Copying: 334/1024 [MB] (14 MBps) [2024-11-03T04:51:21.703Z] Copying: 352/1024 [MB] (18 MBps) [2024-11-03T04:51:22.646Z] Copying: 365/1024 [MB] (12 MBps) [2024-11-03T04:51:23.588Z] Copying: 385/1024 [MB] (19 MBps) [2024-11-03T04:51:24.528Z] Copying: 396/1024 [MB] (10 MBps) [2024-11-03T04:51:25.914Z] Copying: 410/1024 [MB] (14 MBps) [2024-11-03T04:51:26.487Z] Copying: 426/1024 [MB] (15 MBps) [2024-11-03T04:51:27.871Z] Copying: 445/1024 [MB] (19 MBps) [2024-11-03T04:51:28.815Z] Copying: 462/1024 [MB] (17 MBps) [2024-11-03T04:51:29.760Z] Copying: 480/1024 [MB] (17 MBps) [2024-11-03T04:51:30.802Z] Copying: 495/1024 [MB] (15 MBps) [2024-11-03T04:51:31.747Z] Copying: 510/1024 [MB] (14 MBps) [2024-11-03T04:51:32.691Z] Copying: 521/1024 [MB] (11 MBps) [2024-11-03T04:51:33.636Z] Copying: 533/1024 [MB] (12 MBps) [2024-11-03T04:51:34.575Z] Copying: 544/1024 [MB] (10 MBps) [2024-11-03T04:51:35.518Z] Copying: 558/1024 [MB] (13 MBps) [2024-11-03T04:51:36.904Z] Copying: 569/1024 [MB] (11 MBps) [2024-11-03T04:51:37.848Z] Copying: 583/1024 [MB] (13 MBps) [2024-11-03T04:51:38.791Z] Copying: 599/1024 [MB] (16 MBps) [2024-11-03T04:51:39.736Z] Copying: 611/1024 [MB] (12 MBps) [2024-11-03T04:51:40.681Z] Copying: 623/1024 [MB] (11 MBps) [2024-11-03T04:51:41.625Z] Copying: 641/1024 [MB] (17 MBps) [2024-11-03T04:51:42.567Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-03T04:51:43.510Z] Copying: 673/1024 [MB] (20 MBps) [2024-11-03T04:51:44.897Z] Copying: 693/1024 [MB] (20 MBps) 
[2024-11-03T04:51:45.842Z] Copying: 708/1024 [MB] (14 MBps) [2024-11-03T04:51:46.784Z] Copying: 726/1024 [MB] (17 MBps) [2024-11-03T04:51:47.724Z] Copying: 745/1024 [MB] (19 MBps) [2024-11-03T04:51:48.691Z] Copying: 767/1024 [MB] (22 MBps) [2024-11-03T04:51:49.636Z] Copying: 791/1024 [MB] (23 MBps) [2024-11-03T04:51:50.578Z] Copying: 810/1024 [MB] (18 MBps) [2024-11-03T04:51:51.522Z] Copying: 827/1024 [MB] (17 MBps) [2024-11-03T04:51:52.909Z] Copying: 851/1024 [MB] (23 MBps) [2024-11-03T04:51:53.851Z] Copying: 866/1024 [MB] (14 MBps) [2024-11-03T04:51:54.789Z] Copying: 887/1024 [MB] (21 MBps) [2024-11-03T04:51:55.731Z] Copying: 912/1024 [MB] (25 MBps) [2024-11-03T04:51:56.673Z] Copying: 931/1024 [MB] (18 MBps) [2024-11-03T04:51:57.616Z] Copying: 943/1024 [MB] (12 MBps) [2024-11-03T04:51:58.561Z] Copying: 956/1024 [MB] (13 MBps) [2024-11-03T04:51:59.530Z] Copying: 966/1024 [MB] (10 MBps) [2024-11-03T04:52:00.516Z] Copying: 977/1024 [MB] (10 MBps) [2024-11-03T04:52:01.901Z] Copying: 987/1024 [MB] (10 MBps) [2024-11-03T04:52:02.842Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-03T04:52:03.784Z] Copying: 1009/1024 [MB] (10 MBps) [2024-11-03T04:52:04.045Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-03T04:52:04.045Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 04:52:03.954374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.961 [2024-11-03 04:52:03.954462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:40.961 [2024-11-03 04:52:03.954485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:40.961 [2024-11-03 04:52:03.954497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.961 [2024-11-03 04:52:03.954532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.961 [2024-11-03 04:52:03.959330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.961 [2024-11-03 04:52:03.959381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:40.961 [2024-11-03 04:52:03.959396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:31:40.961 [2024-11-03 04:52:03.959406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.961 [2024-11-03 04:52:03.959711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.961 [2024-11-03 04:52:03.959726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:40.961 [2024-11-03 04:52:03.959739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:31:40.961 [2024-11-03 04:52:03.959750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.961 [2024-11-03 04:52:03.959786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.961 [2024-11-03 04:52:03.959798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:40.961 [2024-11-03 04:52:03.959813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:40.961 [2024-11-03 04:52:03.959823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.961 [2024-11-03 04:52:03.959892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.961 [2024-11-03 04:52:03.959905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:40.961 [2024-11-03 04:52:03.959916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.026 ms 00:31:40.961 [2024-11-03 04:52:03.959926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.961 [2024-11-03 04:52:03.959944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:40.961 [2024-11-03 04:52:03.959960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.959973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.959983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.959993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:40.961 [2024-11-03 04:52:03.960185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 
04:52:03.960195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:31:40.962 [2024-11-03 04:52:03.960452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.960991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.961001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.961011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:40.962 [2024-11-03 04:52:03.961031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:40.962 [2024-11-03 04:52:03.961042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 00:31:40.962 [2024-11-03 04:52:03.961056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:40.962 [2024-11-03 04:52:03.961065] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:40.962 [2024-11-03 04:52:03.961075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:40.962 [2024-11-03 04:52:03.961084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:40.962 [2024-11-03 04:52:03.961094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:40.962 [2024-11-03 04:52:03.961104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:40.962 [2024-11-03 04:52:03.961114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:40.962 [2024-11-03 04:52:03.961122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:40.962 [2024-11-03 04:52:03.961130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:40.962 [2024-11-03 04:52:03.961139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.962 [2024-11-03 04:52:03.961149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:40.963 [2024-11-03 04:52:03.961160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:31:40.963 [2024-11-03 04:52:03.961171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:03.976683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.963 [2024-11-03 04:52:03.976731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:40.963 [2024-11-03 04:52:03.976744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.491 ms 00:31:40.963 [2024-11-03 04:52:03.976753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:03.977140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.963 [2024-11-03 04:52:03.977164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:40.963 [2024-11-03 04:52:03.977173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:31:40.963 [2024-11-03 04:52:03.977181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:04.013580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.963 [2024-11-03 04:52:04.013772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.963 [2024-11-03 04:52:04.013793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.963 [2024-11-03 04:52:04.013803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:04.013878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.963 [2024-11-03 04:52:04.013889] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.963 [2024-11-03 04:52:04.013899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.963 [2024-11-03 04:52:04.013909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:04.013977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.963 [2024-11-03 04:52:04.013988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.963 [2024-11-03 04:52:04.013997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.963 [2024-11-03 04:52:04.014006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.963 [2024-11-03 04:52:04.014021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.963 [2024-11-03 04:52:04.014030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.963 [2024-11-03 04:52:04.014038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.963 [2024-11-03 04:52:04.014046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.099364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.099421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:41.223 [2024-11-03 04:52:04.099434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.099443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:41.223 [2024-11-03 04:52:04.169472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.169482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:41.223 [2024-11-03 04:52:04.169619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.169628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:41.223 [2024-11-03 04:52:04.169687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.169696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:41.223 [2024-11-03 04:52:04.169800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.169808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169835] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:41.223 [2024-11-03 04:52:04.169853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.223 [2024-11-03 04:52:04.169863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.223 [2024-11-03 04:52:04.169903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.223 [2024-11-03 04:52:04.169915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:41.223 [2024-11-03 04:52:04.169924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.224 [2024-11-03 04:52:04.169932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.224 [2024-11-03 04:52:04.169976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:41.224 [2024-11-03 04:52:04.169986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:41.224 [2024-11-03 04:52:04.169995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:41.224 [2024-11-03 04:52:04.170003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.224 [2024-11-03 04:52:04.170133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.733 ms, result 0 00:31:42.165 00:31:42.165 00:31:42.165 04:52:04 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:44.078 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:44.078 04:52:07 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:44.078 [2024-11-03 04:52:07.071768] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:31:44.078 [2024-11-03 04:52:07.071883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83503 ] 00:31:44.338 [2024-11-03 04:52:07.227895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.338 [2024-11-03 04:52:07.338202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.597 [2024-11-03 04:52:07.625021] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:44.597 [2024-11-03 04:52:07.625097] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:44.859 [2024-11-03 04:52:07.785654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.785713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:44.859 [2024-11-03 04:52:07.785732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:44.859 [2024-11-03 04:52:07.785742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.785796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.785808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:44.859 [2024-11-03 04:52:07.785819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:44.859 [2024-11-03 04:52:07.785827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.785848] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:44.859 [2024-11-03 04:52:07.786612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:44.859 [2024-11-03 04:52:07.786636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.786648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:44.859 [2024-11-03 04:52:07.786658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:31:44.859 [2024-11-03 04:52:07.786668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.786944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:44.859 [2024-11-03 04:52:07.786973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.786982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:44.859 [2024-11-03 04:52:07.786997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:44.859 [2024-11-03 04:52:07.787006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.787058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.787068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:44.859 [2024-11-03 04:52:07.787079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:44.859 [2024-11-03 04:52:07.787088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.787535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:44.859 [2024-11-03 04:52:07.787550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:44.859 [2024-11-03 04:52:07.787583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:31:44.859 [2024-11-03 04:52:07.787592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.787663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.787674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:44.859 [2024-11-03 04:52:07.787682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:44.859 [2024-11-03 04:52:07.787690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.787714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.787724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:44.859 [2024-11-03 04:52:07.787734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:44.859 [2024-11-03 04:52:07.787745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.787766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:44.859 [2024-11-03 04:52:07.791941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.791987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:44.859 [2024-11-03 04:52:07.791997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:31:44.859 [2024-11-03 04:52:07.792005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.792041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.792050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:44.859 [2024-11-03 04:52:07.792059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:44.859 [2024-11-03 04:52:07.792067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.792123] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:44.859 [2024-11-03 04:52:07.792146] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:44.859 [2024-11-03 04:52:07.792187] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:44.859 [2024-11-03 04:52:07.792204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:44.859 [2024-11-03 04:52:07.792307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:44.859 [2024-11-03 04:52:07.792319] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:44.859 [2024-11-03 04:52:07.792331] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:44.859 [2024-11-03 04:52:07.792341] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:44.859 [2024-11-03 04:52:07.792352] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:44.859 [2024-11-03 04:52:07.792362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:44.859 [2024-11-03 04:52:07.792370] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:44.859 [2024-11-03 04:52:07.792380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:44.859 [2024-11-03 04:52:07.792388] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:44.859 [2024-11-03 04:52:07.792396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.792404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:44.859 [2024-11-03 04:52:07.792412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:31:44.859 [2024-11-03 04:52:07.792419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.792502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.859 [2024-11-03 04:52:07.792512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:44.859 [2024-11-03 04:52:07.792521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:44.859 [2024-11-03 04:52:07.792575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.859 [2024-11-03 04:52:07.792683] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:44.859 [2024-11-03 04:52:07.792696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:44.859 [2024-11-03 04:52:07.792705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:44.859 [2024-11-03 04:52:07.792714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.859 [2024-11-03 04:52:07.792724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:44.859 [2024-11-03 04:52:07.792731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:44.859 [2024-11-03 04:52:07.792739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:44.859 [2024-11-03 04:52:07.792748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:44.859 [2024-11-03 04:52:07.792755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:44.859 [2024-11-03 04:52:07.792763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:44.859 [2024-11-03 04:52:07.792771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:44.859 [2024-11-03 04:52:07.792778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:44.859 [2024-11-03 04:52:07.792786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:44.859 [2024-11-03 04:52:07.792794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:44.859 [2024-11-03 04:52:07.792801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:44.859 [2024-11-03 04:52:07.792808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.859 [2024-11-03 04:52:07.792814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:44.860 [2024-11-03 04:52:07.792828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:44.860 [2024-11-03 04:52:07.792835] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:44.860 [2024-11-03 04:52:07.792848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:44.860 [2024-11-03 04:52:07.792863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:44.860 [2024-11-03 04:52:07.792871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:44.860 [2024-11-03 04:52:07.792884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:44.860 [2024-11-03 04:52:07.792890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:44.860 [2024-11-03 04:52:07.792904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:44.860 [2024-11-03 04:52:07.792910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:44.860 [2024-11-03 04:52:07.792922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:44.860 [2024-11-03 04:52:07.792929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:44.860 [2024-11-03 04:52:07.792946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:44.860 [2024-11-03 04:52:07.792953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:44.860 [2024-11-03 04:52:07.792959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:44.860 [2024-11-03 04:52:07.792966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:44.860 [2024-11-03 04:52:07.792972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:44.860 [2024-11-03 04:52:07.792979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.860 [2024-11-03 04:52:07.792985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:44.860 [2024-11-03 04:52:07.792992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:44.860 [2024-11-03 04:52:07.793001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.860 [2024-11-03 04:52:07.793009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:44.860 [2024-11-03 04:52:07.793017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:44.860 [2024-11-03 04:52:07.793024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:44.860 [2024-11-03 04:52:07.793032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:44.860 [2024-11-03 04:52:07.793039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:44.860 [2024-11-03 04:52:07.793046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:44.860 [2024-11-03 04:52:07.793054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:44.860 
[2024-11-03 04:52:07.793062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:44.860 [2024-11-03 04:52:07.793069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:44.860 [2024-11-03 04:52:07.793075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:44.860 [2024-11-03 04:52:07.793083] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:44.860 [2024-11-03 04:52:07.793092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:44.860 [2024-11-03 04:52:07.793111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:44.860 [2024-11-03 04:52:07.793117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:44.860 [2024-11-03 04:52:07.793124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:44.860 [2024-11-03 04:52:07.793132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:44.860 [2024-11-03 04:52:07.793140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:44.860 [2024-11-03 04:52:07.793148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:44.860 [2024-11-03 04:52:07.793155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:44.860 [2024-11-03 04:52:07.793162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:44.860 [2024-11-03 04:52:07.793169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:44.860 [2024-11-03 04:52:07.793205] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:44.860 [2024-11-03 04:52:07.793214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:44.860 [2024-11-03 04:52:07.793230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:44.860 [2024-11-03 04:52:07.793237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:44.860 [2024-11-03 04:52:07.793249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:44.860 [2024-11-03 04:52:07.793257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.793265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:44.860 [2024-11-03 04:52:07.793273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:31:44.860 [2024-11-03 04:52:07.793280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.820722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.820904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:44.860 [2024-11-03 04:52:07.821048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.398 ms 00:31:44.860 [2024-11-03 04:52:07.821073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.821172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.821196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:44.860 [2024-11-03 04:52:07.821217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:44.860 [2024-11-03 04:52:07.821242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.875332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.875528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:44.860 [2024-11-03 04:52:07.875715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.020 ms 00:31:44.860 [2024-11-03 04:52:07.875753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.875812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.875844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:44.860 [2024-11-03 04:52:07.875864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:44.860 [2024-11-03 04:52:07.875884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.876010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.876060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:44.860 [2024-11-03 04:52:07.876080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:44.860 [2024-11-03 04:52:07.876101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.876249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.876274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:44.860 [2024-11-03 04:52:07.876355] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:31:44.860 [2024-11-03 04:52:07.876382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.891986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.892144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:44.860 [2024-11-03 04:52:07.892201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.568 ms 00:31:44.860 [2024-11-03 04:52:07.892225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.892395] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:44.860 [2024-11-03 04:52:07.892440] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:44.860 [2024-11-03 04:52:07.892546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.892601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:44.860 [2024-11-03 04:52:07.892628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:31:44.860 [2024-11-03 04:52:07.892655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.904956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.860 [2024-11-03 04:52:07.905098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:44.860 [2024-11-03 04:52:07.905154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.262 ms 00:31:44.860 [2024-11-03 04:52:07.905177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.860 [2024-11-03 04:52:07.905317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.905342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:44.861 [2024-11-03 04:52:07.905363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:44.861 [2024-11-03 04:52:07.905381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.905494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.905522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:44.861 [2024-11-03 04:52:07.905544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:44.861 [2024-11-03 04:52:07.905589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.906188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.906293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:44.861 [2024-11-03 04:52:07.906347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:31:44.861 [2024-11-03 04:52:07.906387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.906424] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:44.861 [2024-11-03 04:52:07.906503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.906578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:44.861 [2024-11-03 04:52:07.906604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:44.861 [2024-11-03 04:52:07.906624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.918997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:44.861 [2024-11-03 04:52:07.919266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.919300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:44.861 [2024-11-03 04:52:07.919365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.607 ms 00:31:44.861 [2024-11-03 04:52:07.919387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.921569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.921693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:44.861 [2024-11-03 04:52:07.921755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:31:44.861 [2024-11-03 04:52:07.921777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.921888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.921915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:44.861 [2024-11-03 04:52:07.921938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:44.861 [2024-11-03 04:52:07.922004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.922046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.922068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:44.861 [2024-11-03 04:52:07.922097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:44.861 [2024-11-03 04:52:07.922116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.861 [2024-11-03 04:52:07.922161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:44.861 [2024-11-03 04:52:07.922216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.861 [2024-11-03 04:52:07.922241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:44.861 [2024-11-03 04:52:07.922261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:44.861 [2024-11-03 04:52:07.922280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.122 [2024-11-03 04:52:07.949014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.122 [2024-11-03 04:52:07.949194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:45.122 [2024-11-03 04:52:07.949257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.696 ms 00:31:45.122 [2024-11-03 04:52:07.949282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.122 [2024-11-03 04:52:07.949373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.122 [2024-11-03 04:52:07.949400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:45.122 [2024-11-03 04:52:07.949421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:31:45.122 [2024-11-03 04:52:07.949440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.122 [2024-11-03 04:52:07.950775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 164.639 ms, result 0 00:31:46.066  [2024-11-03T04:52:10.090Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-03T04:52:11.034Z] Copying: 37/1024 [MB] (24 MBps) [2024-11-03T04:52:11.977Z] Copying: 48/1024 [MB] (10 MBps) [2024-11-03T04:52:13.363Z] Copying: 60/1024 [MB] (12 MBps) [2024-11-03T04:52:14.307Z] Copying: 74/1024 [MB] (13 MBps) [2024-11-03T04:52:15.303Z] Copying: 91/1024 [MB] (17 MBps) [2024-11-03T04:52:16.242Z] Copying: 105/1024 [MB] (14 MBps) [2024-11-03T04:52:17.176Z] Copying: 123/1024 [MB] (18 MBps) [2024-11-03T04:52:18.114Z] Copying: 163/1024 [MB] (39 MBps) [2024-11-03T04:52:19.055Z] Copying: 192/1024 [MB] (28 MBps) [2024-11-03T04:52:20.013Z] Copying: 207/1024 [MB] (15 MBps) [2024-11-03T04:52:21.399Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-03T04:52:21.970Z] Copying: 232/1024 [MB] (14 MBps) [2024-11-03T04:52:23.353Z] Copying: 248520/1048576 [kB] (10180 kBps) [2024-11-03T04:52:24.292Z] Copying: 257/1024 [MB] (14 MBps) [2024-11-03T04:52:25.235Z] Copying: 300/1024 [MB] (43 MBps) [2024-11-03T04:52:26.179Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-03T04:52:27.121Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-03T04:52:28.058Z] Copying: 342/1024 [MB] (20 MBps) [2024-11-03T04:52:29.002Z] Copying: 380/1024 [MB] (38 MBps) [2024-11-03T04:52:30.388Z] Copying: 396/1024 [MB] (15 MBps) [2024-11-03T04:52:31.331Z] Copying: 412/1024 [MB] (16 MBps) [2024-11-03T04:52:32.270Z] Copying: 432264/1048576 [kB] (10148 kBps) [2024-11-03T04:52:33.203Z] Copying: 439/1024 [MB] (16 MBps) [2024-11-03T04:52:34.136Z] Copying: 469/1024 [MB] (30 MBps) [2024-11-03T04:52:35.069Z] Copying: 500/1024 [MB] (31 MBps) [2024-11-03T04:52:36.000Z] Copying: 532/1024 [MB] (31 MBps) [2024-11-03T04:52:36.995Z] Copying: 563/1024 [MB] (31 MBps) [2024-11-03T04:52:38.380Z] Copying: 593/1024 [MB] (30 MBps) [2024-11-03T04:52:39.317Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-03T04:52:40.249Z] Copying: 618/1024 [MB] (14 MBps) [2024-11-03T04:52:41.181Z] Copying: 647/1024 [MB] (28 MBps) [2024-11-03T04:52:42.113Z] Copying: 674/1024 [MB] (26 MBps) [2024-11-03T04:52:43.045Z] Copying: 701/1024 [MB] (27 MBps) [2024-11-03T04:52:43.978Z] Copying: 729/1024 [MB] (27 MBps) [2024-11-03T04:52:45.350Z] Copying: 757/1024 [MB] (27 MBps) [2024-11-03T04:52:46.281Z] Copying: 783/1024 [MB] (26 MBps) [2024-11-03T04:52:47.213Z] Copying: 822/1024 [MB] (39 MBps) [2024-11-03T04:52:48.145Z] Copying: 851/1024 [MB] (28 MBps) [2024-11-03T04:52:49.078Z] Copying: 878/1024 [MB] (27 MBps) [2024-11-03T04:52:50.011Z] Copying: 904/1024 [MB] (26 MBps) [2024-11-03T04:52:51.390Z] Copying: 930/1024 [MB] (25 MBps) [2024-11-03T04:52:52.323Z] Copying: 945/1024 [MB] (15 MBps) [2024-11-03T04:52:53.255Z] Copying: 970/1024 [MB] (25 MBps) [2024-11-03T04:52:54.191Z] Copying: 997/1024 [MB] (26 MBps) [2024-11-03T04:52:55.128Z] Copying: 1023/1024 [MB] (25 MBps) [2024-11-03T04:52:55.128Z] Copying: 1048520/1048576 [kB] (904 kBps) [2024-11-03T04:52:55.128Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-03 04:52:55.037987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.045 [2024-11-03 04:52:55.038072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:32.045 [2024-11-03 04:52:55.038091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.005 ms 00:32:32.045 [2024-11-03 04:52:55.038101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.045 [2024-11-03 04:52:55.041108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:32.045 [2024-11-03 04:52:55.045730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.045 [2024-11-03 04:52:55.045782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:32.045 [2024-11-03 04:52:55.045797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:32:32.045 [2024-11-03 04:52:55.045806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.045 [2024-11-03 04:52:55.061590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.045 [2024-11-03 04:52:55.061707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:32.045 [2024-11-03 04:52:55.061764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.556 ms 00:32:32.045 [2024-11-03 04:52:55.061791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.045 [2024-11-03 04:52:55.061872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.045 [2024-11-03 04:52:55.061900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:32.045 [2024-11-03 04:52:55.061928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:32.045 [2024-11-03 04:52:55.061951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.045 [2024-11-03 04:52:55.062071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.045 [2024-11-03 04:52:55.062097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:32.045 [2024-11-03 04:52:55.062124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:32.045 [2024-11-03 04:52:55.062153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.045 [2024-11-03 04:52:55.062192] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:32.045 [2024-11-03 04:52:55.062226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:32:32.045 [2024-11-03 04:52:55.062257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.062980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 
04:52:55.063106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:32:32.045 [2024-11-03 04:52:55.063810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.063978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.064005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.064027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.064052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:32.045 [2024-11-03 04:52:55.064075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:32.046 [2024-11-03 04:52:55.064885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:32.046 [2024-11-03 04:52:55.064910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 00:32:32.046 [2024-11-03 04:52:55.064936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:32:32.046 [2024-11-03 04:52:55.064959] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:32:32.046 [2024-11-03 04:52:55.064980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:32:32.046 [2024-11-03 04:52:55.065005] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:32.046 [2024-11-03 04:52:55.065025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:32.046 [2024-11-03 04:52:55.065049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:32.046 [2024-11-03 04:52:55.065078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:32.046 [2024-11-03 04:52:55.065100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:32.046 [2024-11-03 04:52:55.065120] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:32.046 [2024-11-03 04:52:55.065141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.046 [2024-11-03 04:52:55.065165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:32.046 [2024-11-03 04:52:55.065188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:32:32.046 [2024-11-03 04:52:55.065211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.080948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.046 [2024-11-03 04:52:55.080994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:32.046 [2024-11-03 04:52:55.081006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.683 ms 00:32:32.046 [2024-11-03 04:52:55.081014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.081424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:32.046 [2024-11-03 04:52:55.081436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:32.046 [2024-11-03 04:52:55.081446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:32:32.046 [2024-11-03 04:52:55.081453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.118059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.046 [2024-11-03 04:52:55.118105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:32.046 [2024-11-03 04:52:55.118123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.046 [2024-11-03 04:52:55.118131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.118201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.046 [2024-11-03 04:52:55.118210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:32.046 [2024-11-03 04:52:55.118218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.046 [2024-11-03 04:52:55.118226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.118302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.046 [2024-11-03 04:52:55.118315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:32.046 [2024-11-03 04:52:55.118323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.046 [2024-11-03 04:52:55.118336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.046 [2024-11-03 04:52:55.118353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.046 [2024-11-03 04:52:55.118362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:32.046 [2024-11-03 04:52:55.118370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.046 [2024-11-03 04:52:55.118378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.307 [2024-11-03 04:52:55.204154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.307 [2024-11-03 04:52:55.204213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:32.307 [2024-11-03 04:52:55.204226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:32:32.307 [2024-11-03 04:52:55.204242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.307 [2024-11-03 04:52:55.272888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.307 [2024-11-03 04:52:55.272950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:32.307 [2024-11-03 04:52:55.272962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.307 [2024-11-03 04:52:55.272978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.307 [2024-11-03 04:52:55.273040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:32.308 [2024-11-03 04:52:55.273060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:32.308 [2024-11-03 04:52:55.273151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:32.308 [2024-11-03 04:52:55.273256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:32.308 [2024-11-03 04:52:55.273320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:32.308 [2024-11-03 04:52:55.273392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:32.308 [2024-11-03 04:52:55.273465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:32.308 [2024-11-03 04:52:55.273474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:32.308 [2024-11-03 04:52:55.273484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:32.308 [2024-11-03 04:52:55.273642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 237.939 ms, result 0 00:32:33.751 00:32:33.751 00:32:33.751 04:52:56 ftl.ftl_restore_fast -- 
ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:33.751 [2024-11-03 04:52:56.801760] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:32:33.751 [2024-11-03 04:52:56.801905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83998 ] 00:32:34.012 [2024-11-03 04:52:56.964991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.012 [2024-11-03 04:52:57.082963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.585 [2024-11-03 04:52:57.374628] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.585 [2024-11-03 04:52:57.374698] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.585 [2024-11-03 04:52:57.534319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.534375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:34.585 [2024-11-03 04:52:57.534395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:34.585 [2024-11-03 04:52:57.534403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.534460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.534472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:34.585 [2024-11-03 04:52:57.534483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:34.585 [2024-11-03 04:52:57.534492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.534513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:34.585 [2024-11-03 04:52:57.535295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:34.585 [2024-11-03 04:52:57.535327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.535338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:34.585 [2024-11-03 04:52:57.535348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:32:34.585 [2024-11-03 04:52:57.535357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.535669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:34.585 [2024-11-03 04:52:57.535711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.535721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:34.585 [2024-11-03 04:52:57.535734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:34.585 [2024-11-03 04:52:57.535742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.535798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.535809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate 
super block 00:32:34.585 [2024-11-03 04:52:57.535817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:34.585 [2024-11-03 04:52:57.535825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.536138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.536152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:34.585 [2024-11-03 04:52:57.536165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:32:34.585 [2024-11-03 04:52:57.536174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.536245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.536256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:34.585 [2024-11-03 04:52:57.536265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:34.585 [2024-11-03 04:52:57.536273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.536297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.536308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:34.585 [2024-11-03 04:52:57.536317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:34.585 [2024-11-03 04:52:57.536328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.536350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:34.585 [2024-11-03 04:52:57.540929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.540975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:34.585 [2024-11-03 04:52:57.540987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:32:34.585 [2024-11-03 04:52:57.540995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.541032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.541041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:34.585 [2024-11-03 04:52:57.541050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:34.585 [2024-11-03 04:52:57.541058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.541119] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:34.585 [2024-11-03 04:52:57.541145] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:34.585 [2024-11-03 04:52:57.541185] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:34.585 [2024-11-03 04:52:57.541201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:34.585 [2024-11-03 04:52:57.541308] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:34.585 [2024-11-03 04:52:57.541319] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:34.585 
[2024-11-03 04:52:57.541330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:34.585 [2024-11-03 04:52:57.541341] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:34.585 [2024-11-03 04:52:57.541350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:34.585 [2024-11-03 04:52:57.541358] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:34.585 [2024-11-03 04:52:57.541366] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:34.585 [2024-11-03 04:52:57.541376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:34.585 [2024-11-03 04:52:57.541383] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:34.585 [2024-11-03 04:52:57.541391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.541399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:34.585 [2024-11-03 04:52:57.541407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:32:34.585 [2024-11-03 04:52:57.541416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.541502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.585 [2024-11-03 04:52:57.541511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:34.585 [2024-11-03 04:52:57.541519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:34.585 [2024-11-03 04:52:57.541526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.585 [2024-11-03 04:52:57.541650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:34.585 [2024-11-03 04:52:57.541662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:34.585 [2024-11-03 04:52:57.541671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.585 [2024-11-03 04:52:57.541680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.585 [2024-11-03 04:52:57.541692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:34.585 [2024-11-03 04:52:57.541699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:34.585 [2024-11-03 04:52:57.541706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:34.585 [2024-11-03 04:52:57.541713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:34.585 [2024-11-03 04:52:57.541720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:34.585 [2024-11-03 04:52:57.541727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.585 [2024-11-03 04:52:57.541734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:34.585 [2024-11-03 04:52:57.541741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:34.585 [2024-11-03 04:52:57.541748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.585 [2024-11-03 04:52:57.541755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:34.586 [2024-11-03 04:52:57.541763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:34.586 [2024-11-03 04:52:57.541769] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:34.586 [2024-11-03 04:52:57.541789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:34.586 [2024-11-03 04:52:57.541814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:34.586 [2024-11-03 04:52:57.541836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:34.586 [2024-11-03 04:52:57.541858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:34.586 [2024-11-03 04:52:57.541880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:34.586 [2024-11-03 04:52:57.541901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.586 [2024-11-03 04:52:57.541914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:34.586 [2024-11-03 04:52:57.541921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:34.586 [2024-11-03 04:52:57.541929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.586 [2024-11-03 04:52:57.541936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:34.586 [2024-11-03 04:52:57.541943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:34.586 [2024-11-03 04:52:57.541950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:34.586 [2024-11-03 04:52:57.541964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:34.586 [2024-11-03 04:52:57.541970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.586 [2024-11-03 04:52:57.541976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:34.586 [2024-11-03 04:52:57.541984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:34.586 [2024-11-03 04:52:57.541991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.586 [2024-11-03 04:52:57.541999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.586 [2024-11-03 
04:52:57.542007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:34.586 [2024-11-03 04:52:57.542014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:34.586 [2024-11-03 04:52:57.542021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:34.586 [2024-11-03 04:52:57.542029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:34.586 [2024-11-03 04:52:57.542035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:34.586 [2024-11-03 04:52:57.542043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:34.586 [2024-11-03 04:52:57.542052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:34.586 [2024-11-03 04:52:57.542064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:34.586 [2024-11-03 04:52:57.542084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:34.586 [2024-11-03 04:52:57.542092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:34.586 [2024-11-03 04:52:57.542100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:34.586 [2024-11-03 04:52:57.542107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:34.586 [2024-11-03 04:52:57.542115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:34.586 [2024-11-03 04:52:57.542122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:34.586 [2024-11-03 04:52:57.542130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:34.586 [2024-11-03 04:52:57.542138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:34.586 [2024-11-03 04:52:57.542146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:34.586 [2024-11-03 04:52:57.542186] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:32:34.586 [2024-11-03 04:52:57.542195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:34.586 [2024-11-03 04:52:57.542212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:34.586 [2024-11-03 04:52:57.542221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:34.586 [2024-11-03 04:52:57.542229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:34.586 [2024-11-03 04:52:57.542237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.542245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:34.586 [2024-11-03 04:52:57.542253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:32:34.586 [2024-11-03 04:52:57.542261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.570070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.570114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:34.586 [2024-11-03 04:52:57.570127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.768 ms 00:32:34.586 [2024-11-03 04:52:57.570135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.570221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.570231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:34.586 [2024-11-03 04:52:57.570240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:34.586 [2024-11-03 04:52:57.570252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.616599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.616648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:34.586 [2024-11-03 04:52:57.616662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.289 ms 00:32:34.586 [2024-11-03 04:52:57.616671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.616720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.616734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:34.586 [2024-11-03 04:52:57.616744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:34.586 [2024-11-03 04:52:57.616752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.616869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.616883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:34.586 [2024-11-03 04:52:57.616893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:34.586 [2024-11-03 04:52:57.616901] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.617032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.617045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:34.586 [2024-11-03 04:52:57.617059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:34.586 [2024-11-03 04:52:57.617069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.632854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.632899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:34.586 [2024-11-03 04:52:57.632911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.765 ms 00:32:34.586 [2024-11-03 04:52:57.632919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.633076] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:34.586 [2024-11-03 04:52:57.633092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:34.586 [2024-11-03 04:52:57.633102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.633110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:34.586 [2024-11-03 04:52:57.633124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:34.586 [2024-11-03 04:52:57.633132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.586 [2024-11-03 04:52:57.645476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.586 [2024-11-03 04:52:57.645534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:34.587 [2024-11-03 04:52:57.645547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.323 ms 00:32:34.587 [2024-11-03 04:52:57.645555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.645700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.645711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:34.587 [2024-11-03 04:52:57.645720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:32:34.587 [2024-11-03 04:52:57.645729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.645789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.645800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:34.587 [2024-11-03 04:52:57.645809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:34.587 [2024-11-03 04:52:57.645817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.646406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.646419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:34.587 [2024-11-03 04:52:57.646430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:32:34.587 [2024-11-03 04:52:57.646438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 
[2024-11-03 04:52:57.646455] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:34.587 [2024-11-03 04:52:57.646469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.646478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:34.587 [2024-11-03 04:52:57.646486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:34.587 [2024-11-03 04:52:57.646494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.660545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:34.587 [2024-11-03 04:52:57.660717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.660730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:34.587 [2024-11-03 04:52:57.660740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.204 ms 00:32:34.587 [2024-11-03 04:52:57.660749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.662973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.663002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:34.587 [2024-11-03 04:52:57.663017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:32:34.587 [2024-11-03 04:52:57.663025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.663105] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:34.587 [2024-11-03 04:52:57.663580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.663593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:34.587 [2024-11-03 04:52:57.663604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:32:34.587 [2024-11-03 04:52:57.663613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.663641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.663659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:34.587 [2024-11-03 04:52:57.663668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:34.587 [2024-11-03 04:52:57.663677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.587 [2024-11-03 04:52:57.663710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:34.587 [2024-11-03 04:52:57.663722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.587 [2024-11-03 04:52:57.663730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:34.587 [2024-11-03 04:52:57.663739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:34.587 [2024-11-03 04:52:57.663747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.849 [2024-11-03 04:52:57.690706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.849 [2024-11-03 04:52:57.690760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:34.849 [2024-11-03 04:52:57.690774] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.940 ms 00:32:34.849 [2024-11-03 04:52:57.690782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.849 [2024-11-03 04:52:57.690874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.849 [2024-11-03 04:52:57.690885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:34.849 [2024-11-03 04:52:57.690895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:34.849 [2024-11-03 04:52:57.690904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.849 [2024-11-03 04:52:57.692144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.351 ms, result 0 00:32:36.232  [2024-11-03T04:53:00.255Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-03T04:53:01.194Z] Copying: 33/1024 [MB] (14 MBps) [2024-11-03T04:53:02.138Z] Copying: 48/1024 [MB] (14 MBps) [2024-11-03T04:53:03.079Z] Copying: 60/1024 [MB] (12 MBps) [2024-11-03T04:53:04.022Z] Copying: 73/1024 [MB] (13 MBps) [2024-11-03T04:53:04.966Z] Copying: 86/1024 [MB] (12 MBps) [2024-11-03T04:53:05.907Z] Copying: 99/1024 [MB] (13 MBps) [2024-11-03T04:53:07.290Z] Copying: 115/1024 [MB] (15 MBps) [2024-11-03T04:53:08.231Z] Copying: 126/1024 [MB] (11 MBps) [2024-11-03T04:53:09.171Z] Copying: 137/1024 [MB] (10 MBps) [2024-11-03T04:53:10.117Z] Copying: 150/1024 [MB] (13 MBps) [2024-11-03T04:53:11.060Z] Copying: 168/1024 [MB] (17 MBps) [2024-11-03T04:53:12.006Z] Copying: 185/1024 [MB] (17 MBps) [2024-11-03T04:53:12.951Z] Copying: 203/1024 [MB] (17 MBps) [2024-11-03T04:53:13.896Z] Copying: 222/1024 [MB] (19 MBps) [2024-11-03T04:53:15.284Z] Copying: 237/1024 [MB] (14 MBps) [2024-11-03T04:53:16.266Z] Copying: 258/1024 [MB] (20 MBps) [2024-11-03T04:53:17.209Z] Copying: 278/1024 [MB] (20 MBps) [2024-11-03T04:53:18.151Z] Copying: 295/1024 [MB] (16 MBps) [2024-11-03T04:53:19.094Z] Copying: 310/1024 [MB] (14 MBps) [2024-11-03T04:53:20.036Z] Copying: 328/1024 [MB] (18 MBps) [2024-11-03T04:53:20.980Z] Copying: 341/1024 [MB] (12 MBps) [2024-11-03T04:53:21.926Z] Copying: 355/1024 [MB] (13 MBps) [2024-11-03T04:53:23.314Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-03T04:53:23.888Z] Copying: 383/1024 [MB] (16 MBps) [2024-11-03T04:53:25.275Z] Copying: 400/1024 [MB] (16 MBps) [2024-11-03T04:53:26.218Z] Copying: 417/1024 [MB] (17 MBps) [2024-11-03T04:53:27.160Z] Copying: 434/1024 [MB] (16 MBps) [2024-11-03T04:53:28.102Z] Copying: 452/1024 [MB] (18 MBps) [2024-11-03T04:53:29.045Z] Copying: 470/1024 [MB] (18 MBps) [2024-11-03T04:53:29.988Z] Copying: 485/1024 [MB] (14 MBps) [2024-11-03T04:53:30.930Z] Copying: 499/1024 [MB] (14 MBps) [2024-11-03T04:53:32.313Z] Copying: 510/1024 [MB] (10 MBps) [2024-11-03T04:53:33.256Z] Copying: 524/1024 [MB] (13 MBps) [2024-11-03T04:53:34.196Z] Copying: 535/1024 [MB] (11 MBps) [2024-11-03T04:53:35.139Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-03T04:53:36.083Z] Copying: 557/1024 [MB] (10 MBps) [2024-11-03T04:53:37.084Z] Copying: 572/1024 [MB] (14 MBps) [2024-11-03T04:53:38.028Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-03T04:53:38.963Z] Copying: 597/1024 [MB] (13 MBps) [2024-11-03T04:53:39.905Z] Copying: 614/1024 [MB] (17 MBps) [2024-11-03T04:53:41.290Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-03T04:53:42.234Z] Copying: 642/1024 [MB] (17 MBps) [2024-11-03T04:53:43.175Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-03T04:53:44.116Z] Copying: 664/1024 [MB] (11 MBps) [2024-11-03T04:53:45.058Z] Copying: 675/1024 [MB] 
(10 MBps) [2024-11-03T04:53:45.991Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-03T04:53:46.930Z] Copying: 706/1024 [MB] (20 MBps) [2024-11-03T04:53:48.309Z] Copying: 723/1024 [MB] (17 MBps) [2024-11-03T04:53:49.249Z] Copying: 742/1024 [MB] (19 MBps) [2024-11-03T04:53:50.182Z] Copying: 766/1024 [MB] (23 MBps) [2024-11-03T04:53:51.115Z] Copying: 780/1024 [MB] (13 MBps) [2024-11-03T04:53:52.054Z] Copying: 799/1024 [MB] (18 MBps) [2024-11-03T04:53:52.998Z] Copying: 816/1024 [MB] (16 MBps) [2024-11-03T04:53:53.942Z] Copying: 827/1024 [MB] (10 MBps) [2024-11-03T04:53:55.324Z] Copying: 837/1024 [MB] (10 MBps) [2024-11-03T04:53:55.897Z] Copying: 852/1024 [MB] (14 MBps) [2024-11-03T04:53:56.892Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-03T04:53:58.275Z] Copying: 875/1024 [MB] (12 MBps) [2024-11-03T04:53:59.211Z] Copying: 889/1024 [MB] (13 MBps) [2024-11-03T04:54:00.153Z] Copying: 909/1024 [MB] (20 MBps) [2024-11-03T04:54:01.092Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-03T04:54:02.034Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-03T04:54:02.975Z] Copying: 942/1024 [MB] (10 MBps) [2024-11-03T04:54:03.916Z] Copying: 953/1024 [MB] (11 MBps) [2024-11-03T04:54:05.304Z] Copying: 972/1024 [MB] (19 MBps) [2024-11-03T04:54:06.249Z] Copying: 988/1024 [MB] (16 MBps) [2024-11-03T04:54:07.193Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-03T04:54:07.193Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-03T04:54:07.193Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-03 04:54:06.983949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.109 [2024-11-03 04:54:06.984033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:44.109 [2024-11-03 04:54:06.984058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:44.109 [2024-11-03 04:54:06.984073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.109 [2024-11-03 04:54:06.984111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:44.109 [2024-11-03 04:54:06.989142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.109 [2024-11-03 04:54:06.989179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:44.109 [2024-11-03 04:54:06.989191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.005 ms 00:33:44.109 [2024-11-03 04:54:06.989201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.109 [2024-11-03 04:54:06.990533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.109 [2024-11-03 04:54:06.990570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:44.109 [2024-11-03 04:54:06.990583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:33:44.109 [2024-11-03 04:54:06.990592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.109 [2024-11-03 04:54:06.990625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.109 [2024-11-03 04:54:06.990638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:44.109 [2024-11-03 04:54:06.990648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:44.109 [2024-11-03 04:54:06.990659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.109 [2024-11-03 04:54:06.990717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.109 [2024-11-03 04:54:06.990728] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:44.109 [2024-11-03 04:54:06.990741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:44.109 [2024-11-03 04:54:06.990751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.109 [2024-11-03 04:54:06.990767] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:44.109 [2024-11-03 04:54:06.990782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:44.109 [2024-11-03 04:54:06.990794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:44.109 [2024-11-03 04:54:06.990961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.990971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.990980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.990990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:33:44.110 [2024-11-03 04:54:06.990999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991730] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:44.110 [2024-11-03 04:54:06.991781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:44.110 [2024-11-03 04:54:06.991791] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3 00:33:44.110 [2024-11-03 04:54:06.991801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:44.110 [2024-11-03 04:54:06.991810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:33:44.110 [2024-11-03 04:54:06.991819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:33:44.110 [2024-11-03 04:54:06.991829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:33:44.110 [2024-11-03 04:54:06.991928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:44.110 [2024-11-03 04:54:06.991941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:44.110 [2024-11-03 04:54:06.991951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:44.110 [2024-11-03 04:54:06.991959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:44.110 [2024-11-03 04:54:06.991968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:44.110 [2024-11-03 04:54:06.991976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.110 [2024-11-03 04:54:06.991985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:44.110 [2024-11-03 04:54:06.991995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:33:44.110 [2024-11-03 04:54:06.992004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.007103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.110 [2024-11-03 04:54:07.007136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:44.110 [2024-11-03 04:54:07.007148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.052 ms 00:33:44.110 [2024-11-03 04:54:07.007162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.007528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.110 [2024-11-03 04:54:07.007547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:44.110 [2024-11-03 04:54:07.007570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:33:44.110 [2024-11-03 04:54:07.007579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.043513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.110 [2024-11-03 04:54:07.043574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:44.110 [2024-11-03 04:54:07.043589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.110 [2024-11-03 04:54:07.043599] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.043673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.110 [2024-11-03 04:54:07.043683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:44.110 [2024-11-03 04:54:07.043693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.110 [2024-11-03 04:54:07.043703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.043765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.110 [2024-11-03 04:54:07.043776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:44.110 [2024-11-03 04:54:07.043786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.110 [2024-11-03 04:54:07.043798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.043817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.110 [2024-11-03 04:54:07.043827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:44.110 [2024-11-03 04:54:07.043838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.110 [2024-11-03 04:54:07.043847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.110 [2024-11-03 04:54:07.127198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.110 [2024-11-03 04:54:07.127247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:44.110 [2024-11-03 04:54:07.127266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.110 [2024-11-03 04:54:07.127275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.195599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.195646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:44.370 [2024-11-03 04:54:07.195665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.195674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.195755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.195766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:44.370 [2024-11-03 04:54:07.195776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.195784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.195825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.195835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:44.370 [2024-11-03 04:54:07.195845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.195854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.195934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.195944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:44.370 [2024-11-03 04:54:07.195954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.195962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.195993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.196002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:44.370 [2024-11-03 04:54:07.196010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.196017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.196067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.196076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:44.370 [2024-11-03 04:54:07.196085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.196093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.196143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.370 [2024-11-03 04:54:07.196154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:44.370 [2024-11-03 04:54:07.196163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.370 [2024-11-03 04:54:07.196171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.370 [2024-11-03 04:54:07.196301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 212.328 ms, result 0 00:33:44.941 00:33:44.941 00:33:44.941 04:54:07 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:47.485 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:47.485 Process with pid 81993 is not found 00:33:47.485 Remove shared memory files 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81993 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 81993 ']' 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 81993 00:33:47.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (81993) - No such process 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- common/autotest_common.sh@979 -- # echo 'Process with pid 81993 is not found' 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_band_md /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_l2p_l1 
/dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_l2p_l2 /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_l2p_l2_ctx /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_nvc_md /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_p2l_pool /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_sb /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_sb_shm /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_trim_bitmap /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_trim_log /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_trim_md /dev/hugepages/ftl_aa7bdfd2-f25d-4ec2-8c76-aa765e7ae8a3_vmap 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:47.485 00:33:47.485 real 4m30.551s 00:33:47.485 user 4m18.378s 00:33:47.485 sys 0m12.440s 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:47.485 ************************************ 00:33:47.485 04:54:10 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:47.485 END TEST ftl_restore_fast 00:33:47.485 ************************************ 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@14 -- # killprocess 72230 00:33:47.485 04:54:10 ftl -- common/autotest_common.sh@952 -- # '[' -z 72230 ']' 00:33:47.485 04:54:10 ftl -- common/autotest_common.sh@956 -- # kill -0 72230 00:33:47.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72230) - No such process 00:33:47.485 Process with pid 72230 is not found 00:33:47.485 04:54:10 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72230 is not found' 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84753 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84753 00:33:47.485 04:54:10 ftl -- common/autotest_common.sh@833 -- # '[' -z 84753 ']' 00:33:47.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.485 04:54:10 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.485 04:54:10 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:47.486 04:54:10 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:47.486 04:54:10 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.486 04:54:10 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:47.486 04:54:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:47.486 [2024-11-03 04:54:10.503976] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:33:47.486 [2024-11-03 04:54:10.504096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84753 ] 00:33:47.746 [2024-11-03 04:54:10.666360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.746 [2024-11-03 04:54:10.767283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.687 04:54:11 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:48.687 04:54:11 ftl -- common/autotest_common.sh@866 -- # return 0 00:33:48.687 04:54:11 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:48.687 nvme0n1 00:33:48.687 04:54:11 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:48.687 04:54:11 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:48.687 04:54:11 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:48.947 04:54:11 ftl -- ftl/common.sh@28 -- # stores=4fbf60bb-d817-490e-a2f0-08fc990e8c1f 00:33:48.947 04:54:11 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:48.947 04:54:11 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4fbf60bb-d817-490e-a2f0-08fc990e8c1f 00:33:49.208 04:54:12 ftl -- ftl/ftl.sh@23 -- # killprocess 84753 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@952 -- # '[' -z 84753 ']' 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@956 -- # kill -0 84753 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@957 -- # uname 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84753 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:49.208 killing process with pid 84753 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84753' 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@971 -- # kill 84753 00:33:49.208 04:54:12 ftl -- common/autotest_common.sh@976 -- # wait 84753 00:33:50.589 04:54:13 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:50.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:50.852 Waiting for block devices as requested 00:33:50.852 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:50.852 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:51.115 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:51.115 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:56.472 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:56.472 04:54:19 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:56.472 Remove shared memory files 00:33:56.472 04:54:19 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:56.472 04:54:19 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:56.472 04:54:19 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:56.472 04:54:19 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:56.472 04:54:19 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:56.472 04:54:19 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:56.472 
************************************ 00:33:56.472 END TEST ftl 00:33:56.472 ************************************ 00:33:56.472 00:33:56.472 real 19m3.215s 00:33:56.472 user 21m4.056s 00:33:56.472 sys 1m36.519s 00:33:56.472 04:54:19 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:56.472 04:54:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 04:54:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:56.472 04:54:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:56.472 04:54:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:56.472 04:54:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:56.472 04:54:19 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:56.472 04:54:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:56.472 04:54:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:56.472 04:54:19 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:56.472 04:54:19 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:56.472 04:54:19 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:56.472 04:54:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.472 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:33:56.472 04:54:19 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:56.472 04:54:19 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:33:56.472 04:54:19 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:33:56.472 04:54:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.856 INFO: APP EXITING 00:33:57.856 INFO: killing all VMs 00:33:57.856 INFO: killing vhost app 00:33:57.856 INFO: EXIT DONE 00:33:58.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:58.689 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:58.689 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:58.689 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:58.689 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:58.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:59.523 Cleaning 00:33:59.523 Removing: /var/run/dpdk/spdk0/config 00:33:59.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:59.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:59.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:59.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:59.523 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:59.523 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:59.523 Removing: /var/run/dpdk/spdk0 00:33:59.523 Removing: /var/run/dpdk/spdk_pid56966 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57168 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57375 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57468 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57508 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57625 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57643 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57836 00:33:59.523 Removing: /var/run/dpdk/spdk_pid57923 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58013 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58119 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58205 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58250 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58281 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58357 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58457 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58889 00:33:59.523 Removing: /var/run/dpdk/spdk_pid58953 
00:33:59.523 Removing: /var/run/dpdk/spdk_pid59005 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59021 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59112 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59128 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59219 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59235 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59288 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59306 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59358 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59372 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59526 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59568 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59652 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59824 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59902 00:33:59.523 Removing: /var/run/dpdk/spdk_pid59939 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60361 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60459 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60570 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60624 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60644 00:33:59.523 Removing: /var/run/dpdk/spdk_pid60728 00:33:59.523 Removing: /var/run/dpdk/spdk_pid61346 00:33:59.523 Removing: /var/run/dpdk/spdk_pid61388 00:33:59.523 Removing: /var/run/dpdk/spdk_pid61867 00:33:59.523 Removing: /var/run/dpdk/spdk_pid61965 00:33:59.523 Removing: /var/run/dpdk/spdk_pid62074 00:33:59.523 Removing: /var/run/dpdk/spdk_pid62127 00:33:59.523 Removing: /var/run/dpdk/spdk_pid62147 00:33:59.523 Removing: /var/run/dpdk/spdk_pid62178 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64014 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64150 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64155 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64167 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64207 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64211 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64223 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64268 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64272 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64284 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64329 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64333 00:33:59.523 Removing: /var/run/dpdk/spdk_pid64345 00:33:59.523 Removing: /var/run/dpdk/spdk_pid65711 00:33:59.523 Removing: /var/run/dpdk/spdk_pid65808 00:33:59.523 Removing: /var/run/dpdk/spdk_pid67216 00:33:59.523 Removing: /var/run/dpdk/spdk_pid68586 00:33:59.523 Removing: /var/run/dpdk/spdk_pid68673 00:33:59.523 Removing: /var/run/dpdk/spdk_pid68749 00:33:59.523 Removing: /var/run/dpdk/spdk_pid68832 00:33:59.523 Removing: /var/run/dpdk/spdk_pid68931 00:33:59.523 Removing: /var/run/dpdk/spdk_pid69005 00:33:59.523 Removing: /var/run/dpdk/spdk_pid69146 00:33:59.523 Removing: /var/run/dpdk/spdk_pid69506 00:33:59.523 Removing: /var/run/dpdk/spdk_pid69537 00:33:59.785 Removing: /var/run/dpdk/spdk_pid69993 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70173 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70271 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70381 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70434 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70454 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70751 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70812 00:33:59.785 Removing: /var/run/dpdk/spdk_pid70885 00:33:59.785 Removing: /var/run/dpdk/spdk_pid71271 00:33:59.785 Removing: /var/run/dpdk/spdk_pid71420 00:33:59.785 Removing: /var/run/dpdk/spdk_pid72230 00:33:59.785 Removing: /var/run/dpdk/spdk_pid72362 00:33:59.785 Removing: /var/run/dpdk/spdk_pid72527 00:33:59.785 Removing: 
/var/run/dpdk/spdk_pid72630 00:33:59.785 Removing: /var/run/dpdk/spdk_pid72939 00:33:59.785 Removing: /var/run/dpdk/spdk_pid73204 00:33:59.785 Removing: /var/run/dpdk/spdk_pid73571 00:33:59.785 Removing: /var/run/dpdk/spdk_pid73752 00:33:59.785 Removing: /var/run/dpdk/spdk_pid73933 00:33:59.785 Removing: /var/run/dpdk/spdk_pid73990 00:33:59.785 Removing: /var/run/dpdk/spdk_pid74166 00:33:59.785 Removing: /var/run/dpdk/spdk_pid74197 00:33:59.785 Removing: /var/run/dpdk/spdk_pid74255 00:33:59.785 Removing: /var/run/dpdk/spdk_pid74589 00:33:59.785 Removing: /var/run/dpdk/spdk_pid74808 00:33:59.785 Removing: /var/run/dpdk/spdk_pid75547 00:33:59.785 Removing: /var/run/dpdk/spdk_pid76302 00:33:59.785 Removing: /var/run/dpdk/spdk_pid76989 00:33:59.785 Removing: /var/run/dpdk/spdk_pid77841 00:33:59.785 Removing: /var/run/dpdk/spdk_pid77994 00:33:59.785 Removing: /var/run/dpdk/spdk_pid78071 00:33:59.785 Removing: /var/run/dpdk/spdk_pid78696 00:33:59.785 Removing: /var/run/dpdk/spdk_pid78758 00:33:59.785 Removing: /var/run/dpdk/spdk_pid79428 00:33:59.785 Removing: /var/run/dpdk/spdk_pid80077 00:33:59.785 Removing: /var/run/dpdk/spdk_pid80955 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81077 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81119 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81177 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81235 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81288 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81476 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81570 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81638 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81699 00:33:59.785 Removing: /var/run/dpdk/spdk_pid81738 00:33:59.786 Removing: /var/run/dpdk/spdk_pid81846 00:33:59.786 Removing: /var/run/dpdk/spdk_pid81993 00:33:59.786 Removing: /var/run/dpdk/spdk_pid82214 00:33:59.786 Removing: /var/run/dpdk/spdk_pid82760 00:33:59.786 Removing: /var/run/dpdk/spdk_pid83503 00:33:59.786 Removing: /var/run/dpdk/spdk_pid83998 00:33:59.786 Removing: /var/run/dpdk/spdk_pid84753 00:33:59.786 Clean 00:33:59.786 04:54:22 -- common/autotest_common.sh@1451 -- # return 0 00:33:59.786 04:54:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:59.786 04:54:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.786 04:54:22 -- common/autotest_common.sh@10 -- # set +x 00:33:59.786 04:54:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:59.786 04:54:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.786 04:54:22 -- common/autotest_common.sh@10 -- # set +x 00:34:00.047 04:54:22 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:00.047 04:54:22 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:00.047 04:54:22 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:00.047 04:54:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:00.047 04:54:22 -- spdk/autotest.sh@394 -- # hostname 00:34:00.047 04:54:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:00.047 geninfo: WARNING: invalid characters removed from testname! 
00:34:26.629 04:54:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:28.529 04:54:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:30.429 04:54:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:31.803 04:54:54 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:34.348 04:54:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:37.652 04:55:00 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:40.197 04:55:02 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:40.197 04:55:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:40.197 04:55:02 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:40.197 04:55:02 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:40.197 04:55:02 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:40.197 04:55:02 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:40.197 + [[ -n 5028 ]] 00:34:40.197 + sudo kill 5028 00:34:40.236 [Pipeline] } 00:34:40.252 [Pipeline] // timeout 00:34:40.263 [Pipeline] } 00:34:40.278 [Pipeline] // stage 00:34:40.283 [Pipeline] } 00:34:40.296 [Pipeline] // catchError 00:34:40.305 [Pipeline] stage 00:34:40.307 [Pipeline] { (Stop VM) 00:34:40.319 [Pipeline] sh 00:34:40.603 + vagrant halt 00:34:43.147 ==> default: Halting domain... 
00:34:48.455 [Pipeline] sh 00:34:48.738 + vagrant destroy -f 00:34:51.287 ==> default: Removing domain... 00:34:52.243 [Pipeline] sh 00:34:52.528 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:52.538 [Pipeline] } 00:34:52.552 [Pipeline] // stage 00:34:52.557 [Pipeline] } 00:34:52.571 [Pipeline] // dir 00:34:52.576 [Pipeline] } 00:34:52.590 [Pipeline] // wrap 00:34:52.596 [Pipeline] } 00:34:52.608 [Pipeline] // catchError 00:34:52.617 [Pipeline] stage 00:34:52.620 [Pipeline] { (Epilogue) 00:34:52.632 [Pipeline] sh 00:34:52.919 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:58.209 [Pipeline] catchError 00:34:58.211 [Pipeline] { 00:34:58.224 [Pipeline] sh 00:34:58.510 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:58.510 Artifacts sizes are good 00:34:58.519 [Pipeline] } 00:34:58.533 [Pipeline] // catchError 00:34:58.543 [Pipeline] archiveArtifacts 00:34:58.550 Archiving artifacts 00:34:58.662 [Pipeline] cleanWs 00:34:58.708 [WS-CLEANUP] Deleting project workspace... 00:34:58.708 [WS-CLEANUP] Deferred wipeout is used... 00:34:58.718 [WS-CLEANUP] done 00:34:58.720 [Pipeline] } 00:34:58.735 [Pipeline] // stage 00:34:58.740 [Pipeline] } 00:34:58.753 [Pipeline] // node 00:34:58.757 [Pipeline] End of Pipeline 00:34:58.789 Finished: SUCCESS